
How to stream a single Kafka topic, filtered by key, into multiple HDFS locations?

I am not able to stream my data to multiple HDFS locations, filtered by key. The code below is not working. Please help me find the correct way to write it.

    val ER_stream_V1 = spark
        .readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", configManager.getString("Kafka.Server"))
        .option("subscribe", "Topic1")
        .option("startingOffsets", "latest")
        .option("failOnDataLoss", "false")
        .load()
    val ER_stream_V2 = spark
        .readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", configManager.getString("Kafka.Server"))
        .option("subscribe", "Topic1")
        .option("startingOffsets", "latest")
        .option("failOnDataLoss", "false")
        .load()

        ER_stream_V1.toDF()
        .select(col("key"), col("value").cast("string"))
        .filter(col("key")==="Value1")
        .select(functions.from_json(col("value").cast("string"), Value1Schema.schemaExecution).as("value"))
        .select("value.*")
        .writeStream
        .format("orc")
        .option("metastoreUri", configManager.getString("spark.datasource.hive.warehouse.metastoreUri"))
        .option("checkpointLocation", "/tmp/teststreaming/execution/checkpoint2005")
        .option("path", "/tmp/test/value1")
        .trigger(Trigger.ProcessingTime("5 Seconds"))
        .partitionBy("jobid")
        .start()

        ER_stream_V2.toDF()
        .select(col("key"), col("value").cast("string"))
        .filter(col("key")==="Value2")
        .select(functions.from_json(col("value").cast("string"), Value2Schema.schemaJobParameters).as("value"))
        .select("value.*")
        .writeStream
        .format("orc")
        .option("metastoreUri", configManager.getString("spark.datasource.hive.warehouse.metastoreUri"))
        .option("checkpointLocation", "/tmp/teststreaming/jobparameters/checkpoint2006")
        .option("path", "/tmp/test/value2")
        .trigger(Trigger.ProcessingTime("5 Seconds"))
        .partitionBy("jobid")
        .start()

You should not need two readers. Create one and filter it twice. You might also want to consider setting startingOffsets to earliest in order to read existing topic data.

For example:

val ER_stream = spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", configManager.getString("Kafka.Server"))
    .option("subscribe", "Topic1")
    .option("startingOffsets", "latest")  // maybe change?
    .option("failOnDataLoss", "false")
    .load()
    .toDF()
    .select(col("key").cast("string").as("key"), col("value").cast("string"))

val value1Stream = ER_stream
    .filter(col("key") === "Value1")
    .select(functions.from_json(col("value"), Value1Schema.schemaExecution).as("value"))
    .select("value.*")

val value2Stream = ER_stream
    .filter(col("key") === "Value2")
    .select(functions.from_json(col("value"), Value2Schema.schemaJobParameters).as("value"))
    .select("value.*")

value1Stream.writeStream.format("orc")
    ...
    .start()

value2Stream.writeStream.format("orc")
    ...
    .start()
