
Not able to execute multiple queries in Spark structured streaming

I have created some sample code to execute multiple queries, but I only get output from the first query. In the logs I can see that all the queries are running; I'm not sure what I am doing wrong.

public class A extends D implements Serializable {

    public Dataset<Row> getDataSet(SparkSession session) {
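        // hostname and port are assumed to be fields configured elsewhere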
        Dataset<Row> dfs = session.readStream().format("socket").option("host", hostname).option("port", port).load();
        publish(dfs.toDF(), "reader");
        return dfs;
    }

}

public class B extends D implements Serializable {

    public Dataset<Row> execute(Dataset<Row> ds) {
       Dataset<Row> d = ds.select(functions.explode(functions.split(ds.col("value"), "\\s+")));
        publish(d.toDF(), "component");
        return d;
    }
}

public class C extends D implements Serializable {

    public Dataset<Row> execute(Dataset<Row> ds) {

        publish(ds.toDF(), "console");
        ds.writeStream().format("csv").option("path", "hdfs://hostname:9000/user/abc/data1/")
                .option("checkpointLocation", "hdfs://hostname:9000/user/abc/cp").outputMode("append").start();
        return ds;
    }

}

public class D {

    public void publish(Dataset<Row> dataset, String directory) {
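        // Note: each call to publish() starts a separate streaming query,
        // each with its own sink path and checkpoint location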
        dataset.writeStream().format("csv").option("path", "hdfs://hostname:9000/user/abc/" + directory)
                .option("checkpointLocation", "hdfs://hostname:9000/user/abc/checkpoint/" + directory).outputMode("append")
                .start();

    }
}

public static void main(String[] args) {

    SparkSession session = createSession();
    try {
        A a = new A();
        Dataset<Row> records = a.getDataSet(session);

        B b = new B();
        Dataset<Row> ds = b.execute(records);

        C c = new C();
        c.execute(ds);
        session.streams().awaitAnyTermination();
    } catch (StreamingQueryException e) {
        e.printStackTrace();
    }
}

The issue is because of the input source you are reading from: socket. Spark's socket source opens a separate connection to nc for each started query (i.e., two connections because you have two start() calls), and it is a limitation of nc that it can feed data to only one connection. For other input sources your queries should run fine. See this related question: Executing separate streaming queries in Spark structured streaming.
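
For comparison, here is a minimal sketch (assuming a local SparkSession; the rate source settings and query names are illustrative, not from the original post) showing that several concurrent queries over one streaming source run fine when the source is not socket/nc:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("multi-query-demo").getOrCreate()

// The built-in rate source generates rows internally, so it has no
// single-connection limitation like nc
val rates = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

// Two independent sinks over the same source; each start() creates its own query
val q1 = rates.writeStream.format("console").queryName("raw").start()
val q2 = rates.selectExpr("value * 2 AS doubled")
  .writeStream.format("console").queryName("doubled").start()

spark.streams.awaitAnyTermination()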

I tried a simple test as shown below, and both outputs were printed:

import org.apache.spark.sql.streaming.Trigger
import scala.concurrent.duration._
import spark.implicits._ // for .as[String] and $"value"; already in scope in spark-shell

// Two separate socket sources, each on its own port (and its own nc connection)
val df1 = spark.readStream.format("socket").option("host", "localhost").option("port", 5430).load()
val df9 = spark.readStream.format("socket").option("host", "localhost").option("port", 5431).load()

val df2 = df1.as[String].flatMap(x => x.split(","))
val df3 = df9.as[String].flatMap(x => x.split(",")).select($"value".as("name"))

val sq1 = df3.writeStream.format("console").queryName("sq1")
  .option("truncate", "false").trigger(Trigger.ProcessingTime(10.seconds)).start()

val sq = df2.writeStream.format("console").queryName("sq")
  .option("truncate", "false").trigger(Trigger.ProcessingTime(20.seconds)).start()

spark.streams.awaitAnyTermination()
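
To drive this test, run a separate netcat listener for each port, e.g. nc -lk 5430 and nc -lk 5431 in two terminals, so that each socket source gets its own connection.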

