
pyspark streaming commit offset to kafka

According to the documentation, it is possible to commit offsets to Kafka from a (Scala) Spark Streaming application. I would like to achieve the same functionality from pyspark, or at least store the Kafka partition and offset in an external datastore (RDBMS, etc.).

However, the pyspark API for Kafka integration only provides RDD[(offset, value)] instead of RDD[ConsumerRecord] (as in Scala). Is there any way to obtain (topic, partition, offset) from the Python RDD and persist it elsewhere?
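For context, each batch RDD produced by the pyspark direct stream does carry this metadata through its offsetRanges() method, whose entries expose topic, partition, fromOffset and untilOffset. A minimal sketch, assuming directKafkaStream was already created with KafkaUtils.createDirectStream:

offset_ranges = []

def store_offset_ranges(rdd):
    # transform() runs on the driver, so the ranges can be captured here
    global offset_ranges
    offset_ranges = rdd.offsetRanges()
    return rdd

def print_offset_ranges(rdd):
    for o in offset_ranges:
        print(o.topic, o.partition, o.fromOffset, o.untilOffset)

directKafkaStream \
    .transform(store_offset_ranges) \
    .foreachRDD(print_offset_ranges)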

We can handle offsets in multiple ways. One of them is to store the offset value in a Zookeeper path after every successfully processed batch, and to read that value back when the stream is created again. Code snippet below.

from kazoo.client import KazooClient
from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition

ZOOKEEPER_SERVERS = "127.0.0.1:2181"

def get_zookeeper_instance():
    # Reuse a single KazooClient per process instead of reconnecting each batch
    from kazoo.client import KazooClient
    if 'KazooSingletonInstance' not in globals():
        globals()['KazooSingletonInstance'] = KazooClient(ZOOKEEPER_SERVERS)
        globals()['KazooSingletonInstance'].start()
    return globals()['KazooSingletonInstance']

def save_offsets(rdd):
    # After each successfully processed batch, persist the last offset of the
    # batch to a Zookeeper node named after the source topic
    # (a single-partition topic is assumed, matching var_partition = 0 below)
    zk = get_zookeeper_instance()
    for offset in rdd.offsetRanges():
        path = f"/consumers/{var_topic_src_name}"
        zk.ensure_path(path)
        zk.set(path, str(offset.untilOffset).encode())

# Driver-side setup: read back the committed offset, defaulting to zero on
# the very first run (assumes ssc, var_topic_src_name, var_kafka_parms_src,
# serializer and handler are defined elsewhere)
zk = get_zookeeper_instance()
var_offset_path = f'/consumers/{var_topic_src_name}'

try:
    var_offset = int(zk.get(var_offset_path)[0])
except Exception:
    print("The Spark streaming job started for the first time, so the offset value defaults to zero")
    var_offset = 0

var_partition = 0
topic_partition = TopicAndPartition(var_topic_src_name, var_partition)
fromoffset = {topic_partition: var_offset}
print(fromoffset)

kvs = KafkaUtils.createDirectStream(ssc,
                                    [var_topic_src_name],
                                    var_kafka_parms_src,
                                    valueDecoder=serializer.decode_message,
                                    fromOffsets=fromoffset)
kvs.foreachRDD(handler)
kvs.foreachRDD(save_offsets)
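
If the offsets should go to an RDBMS instead of Zookeeper, as the question also suggests, the same foreachRDD hook works. A minimal sketch using sqlite3 (the database file and table layout are illustrative assumptions):

import sqlite3

def save_offsets_to_db(rdd):
    # Upsert one row per (topic, partition); untilOffset is the next offset to read
    conn = sqlite3.connect("kafka_offsets.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS kafka_offsets
                    (topic TEXT, partition_id INTEGER, until_offset INTEGER,
                     PRIMARY KEY (topic, partition_id))""")
    for o in rdd.offsetRanges():
        conn.execute("INSERT OR REPLACE INTO kafka_offsets VALUES (?, ?, ?)",
                     (o.topic, o.partition, o.untilOffset))
    conn.commit()
    conn.close()

kvs.foreachRDD(save_offsets_to_db)

Storing one row per partition keeps restarts simple: read the table, build the fromOffsets dict, and pass it to createDirectStream exactly as above.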

Regards

Karthikeyan Rasipalayam Durairaj
