
Kafka Structured Streaming checkpoint

I am trying to do Structured Streaming from Kafka. I am planning to store checkpoints in HDFS. I read a Cloudera blog recommending not to store checkpoints in HDFS for Spark Streaming. Is it the same issue for Structured Streaming checkpoints? https://blog.cloudera.com/blog/2017/06/offset-management-for-apache-kafka-with-apache-spark-streaming/

In Structured Streaming, if my Spark program is down for a certain time, how do I get the latest offset from the checkpoint directory and load data after that offset? I am storing checkpoints in a directory as shown below.

df.writeStream \
        .format("text") \
        .option("path", "/files") \
        .option("checkpointLocation", "checkpoints/chkpt") \
        .start()

Update:

This is my Structured Streaming program; it reads a Kafka message, decompresses it, and writes to HDFS.

df = spark \
        .readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", KafkaServer) \
        .option("subscribe", KafkaTopics) \
        .option("failOnDataLoss", "false")\
        .load()
Transaction_DF = df.selectExpr("CAST(value AS STRING)")
Transaction_DF.printSchema()

decomp = Transaction_DF.select(zip_extract("value").alias("decompress"))
#zip_extract is a UDF to decompress the stream

query = decomp.writeStream \
    .format("text") \
    .option("path", "/Data_directory_inHDFS") \
    .option("checkpointLocation", "/path_in_HDFS") \
    .start()

query.awaitTermination()

Storing the checkpoint on long-term storage (HDFS, AWS S3, etc.) is preferred. I would like to add one point here: the property "failOnDataLoss" should not be set to false, as that is not best practice. Data loss is something no one would like to afford. Other than that, you are on the right path.

In your query, try applying a checkpoint while writing the results to persistent storage like HDFS in some format like Parquet. It worked well for me.
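For example, a minimal sketch of such a query (the broker list, topic name, and HDFS paths below are placeholders, not values from the original post):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

# Read the Kafka topic as a stream.
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "broker1:9092") \
    .option("subscribe", "transactions") \
    .load()

# Write the values as Parquet to HDFS, checkpointing to a durable location.
query = df.selectExpr("CAST(value AS STRING)") \
    .writeStream \
    .format("parquet") \
    .option("path", "hdfs:///data/transactions") \
    .option("checkpointLocation", "hdfs:///checkpoints/transactions") \
    .start()

query.awaitTermination()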

As I understood the article, it recommends maintaining offset management in one of: HBase, Kafka, HDFS, or ZooKeeper.

"It is worth mentioning that you can also store offsets in a storage system like HDFS. Storing offsets in HDFS is a less popular approach compared to the above options as HDFS has a higher latency compared to other systems like ZooKeeper and HBase." “值得一提的是,您还可以将偏移量存储在HDFS之类的存储系统中。与上述选项相比,将偏移量存储在HDFS中是一种不太流行的方法,因为与其他系统(如ZooKeeper和HBase)相比,HDFS的延迟更高。”

You can find how to restart a query from an existing checkpoint in the Spark documentation: http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#recovering-from-failures-with-checkpointing
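In short, a restarted query picks up where it left off as long as it is started with the same checkpointLocation (and the same source, sink, and query logic). A minimal sketch, reusing the decomp stream and the placeholder paths from the question above:

# Restart the same query with the same checkpointLocation; Structured Streaming
# reads the offsets recorded there and resumes after them automatically.
query = decomp.writeStream \
    .format("text") \
    .option("path", "/Data_directory_inHDFS") \
    .option("checkpointLocation", "/path_in_HDFS") \
    .start()

query.awaitTermination()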

In Structured Streaming, if my Spark program is down for a certain time, how do I get the latest offset from the checkpoint directory and load data after that offset?

Under your checkpoint directory you will find a folder named 'offsets'. The 'offsets' folder maintains the next offsets to be requested from Kafka. Open the latest file (latest batch file) under the 'offsets' folder; the next expected offsets will be in the format below:

{"kafkatopicname":{"2":16810618,"1":16810853,"0":91332989}}

To load data after that offset, set the property below on your Spark read stream:

 .option("startingOffsets", "{\""+topic+"\":{\"0\":91332989,\"1\":16810853,\"2\":16810618}}")

0, 1, and 2 are the partitions in the topic.
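If you would rather not hand-escape the JSON, an equivalent sketch that builds the startingOffsets string with json.dumps (the topic name, broker, and offsets are the example values from above):

import json

topic = "kafkatopicname"
starting_offsets = json.dumps({topic: {"0": 91332989, "1": 16810853, "2": 16810618}})

# Start reading from the next expected offsets recorded in the checkpoint.
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "broker1:9092") \
    .option("subscribe", topic) \
    .option("startingOffsets", starting_offsets) \
    .load()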
