
Not able to execute multiple queries in Spark structured streaming

I wrote some sample code to run multiple streaming queries, but I only get output for the first query. In the logs I can see that all the queries are running. Not sure what I am doing wrong.

public class A extends D implements Serializable {

    public Dataset<Row> getDataSet(SparkSession session) {
        Dataset<Row> dfs = session.readStream().format("socket").option("host", hostname).option("port", port).load();
        publish(dfs.toDF(), "reader");
        return dfs;
    }

}

public class B extends D implements Serializable {

    public Dataset<Row> execute(Dataset<Row> ds) {
        Dataset<Row> d = ds.select(functions.explode(functions.split(ds.col("value"), "\\s+")));
        publish(d.toDF(), "component");
        return d;
    }
}

public class C extends D implements Serializable {

    public Dataset<Row> execute(Dataset<Row> ds) {
        // Publish to the "console" directory, then also write the stream as CSV.
        publish(ds.toDF(), "console");
        ds.writeStream().format("csv").option("path", "hdfs://hostname:9000/user/abc/data1/")
                .option("checkpointLocation", "hdfs://hostname:9000/user/abc/cp").outputMode("append").start();
        return ds;
    }

}

public class D {

    // Socket connection details used by subclasses (assumed to be set elsewhere).
    protected String hostname;
    protected int port;

    // Starts a streaming query that appends the dataset as CSV under the given directory.
    public void publish(Dataset<Row> dataset, String directory) {
        dataset.writeStream().format("csv").option("path", "hdfs://hostname:9000/user/abc/" + directory)
                .option("checkpointLocation", "hdfs://hostname:9000/user/abc/checkpoint/" + directory).outputMode("append")
                .start();
    }
}

public static void main(String[] args) {

    SparkSession session = createSession();
    try {
        A a = new A();
        Dataset<Row> records = a.getDataSet(session);

        B b = new B();
        Dataset<Row> ds = b.execute(records);

        C c = new C();
        c.execute(ds);
        session.streams().awaitAnyTermination();
    } catch (StreamingQueryException e) {
        e.printStackTrace();
    }
}

The problem is the input source you are reading from: socket. Spark's socket source opens a separate connection to nc for each query you start (and you have multiple start() calls). The limitation is on nc's side: it can feed data to only one connection, so only the first query receives any input. With other input sources your queries should run fine. See this related question: Executing separate streaming queries in spark structured streaming.
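To illustrate, here is a minimal, self-contained sketch (my own example, with a hypothetical class name, not code from the question): it uses Spark's built-in rate source, which generates rows internally rather than reading from a single external connection, so both started queries print output to the console:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MultiQueryRateExample {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("multi-query-rate").master("local[*]").getOrCreate();

        // The rate source generates (timestamp, value) rows by itself,
        // so unlike the socket source it is not limited to one consumer.
        Dataset<Row> rate = spark.readStream().format("rate")
                .option("rowsPerSecond", 1).load();

        // Two independent queries over the same source: both produce output.
        rate.writeStream().format("console").queryName("q1").start();
        rate.select("value").writeStream().format("console").queryName("q2").start();

        spark.streams().awaitAnyTermination();
    }
}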

I tried a simple test like the one below, and both outputs were printed:

// Assumes a spark-shell session (or the equivalent imports):
//   import spark.implicits._
//   import org.apache.spark.sql.streaming.Trigger
//   import scala.concurrent.duration._

// Two socket streams on different ports, so each query gets its own connection.
val df1 = spark.readStream.format("socket").option("host", "localhost").option("port", 5430).load()
val df9 = spark.readStream.format("socket").option("host", "localhost").option("port", 5431).load()

val df2 = df1.as[String].flatMap(x => x.split(","))
val df3 = df9.as[String].flatMap(x => x.split(",")).select($"value".as("name"))

val sq1 = df3.writeStream.format("console").queryName("sq1")
  .option("truncate", "false").trigger(Trigger.ProcessingTime(10.seconds)).start()

val sq = df2.writeStream.format("console").queryName("sq")
  .option("truncate", "false").trigger(Trigger.ProcessingTime(20.seconds)).start()

spark.streams.awaitAnyTermination()
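To feed this test, start two separate listeners in other terminals, e.g. nc -lk 5430 and nc -lk 5431, so that each query reads from its own socket and neither connection is starved.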
