Join dataframe with ORDER BY DESC LIMIT on Spark / Java
I am using the following code:
Dataset<Row> dataframee = df1.as("a").join(df2.as("b"),
        df2.col("id_device").equalTo(df1.col("ID_device_previous"))
            .and(df2.col("id_vehicule").equalTo(df1.col("ID_vehicule_previous")))
            .and(df2.col("tracking_time").lt(df1.col("date_track_previous"))),
        "left")
    .selectExpr("a.*",
        "b.ID_tracking as ID_pprevious",
        "b.km as KM_pprevious",
        "b.tracking_time as tracking_time_pprevious",
        "b.speed as speed_pprevious");
With this I get each df1 row joined to multiple rows from df2. What I want instead is to join df1 to df2 under the same conditions, but keep only the first matching df2 row per df1 row, ordered by df2.col("tracking_time") desc, i.e. limit(0,1).
EDIT

I tried the following code, but it does not work:
df1.registerTempTable("data");
df2.createOrReplaceTempView("tdays");
Dataset<Row> d_f = sparkSession.sql("select a.* from data as a LEFT JOIN (select b.tracking_time from tdays as b where b.id_device = a.ID_device_previous and b.id_vehicule = a.ID_vehicule_previous and b.tracking_time < a.date_track_previous order by b.tracking_time desc limit 1 )");
I need your help.
You can do this in multiple ways that I know of.

You can perform dropDuplicates on the joined dataframe DF:
val finalDF = dataframee.dropDuplicates("") // specify the column(s) you want to be distinct/unique in the final output
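The same call can be written in Java against the joined Dataset from the question; a minimal sketch, assuming `dataframee` from the question and using its join-key column names as the deduplication columns (adjust to your schema):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Keep one row per (device, vehicle) pair from the joined result.
Dataset<Row> finalDF = dataframee.dropDuplicates("ID_device_previous", "ID_vehicule_previous");
```

Note that dropDuplicates keeps an arbitrary row within each group, so on its own it does not guarantee you keep the row with the latest tracking_time; for that, a window function (as sketched under the Spark SQL approach) is more reliable.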
(or)

Spark SQL:

import spark.implicits._
df1.createOrReplaceTempView("table1")
df2.createOrReplaceTempView("table2")
spark.sql("join query with groupBy distinct columns").select(df("*"))
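The "order by tracking_time desc limit 1" requirement is commonly expressed with a window function rather than a correlated subquery in the join (which is what the EDIT attempts). A sketch in Java, assuming a SparkSession `spark`, the views table1/table2 registered from df1/df2, and the column names from the question; it partitions by the join keys, so if df1 can hold several rows per key, partition by a unique df1 row identifier instead:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Rank each df1 row's df2 matches by tracking_time descending,
// then keep only the top-ranked match (rn = 1).
Dataset<Row> d_f = spark.sql(
      "SELECT * FROM ("
    + "  SELECT a.*,"
    + "         b.ID_tracking AS ID_pprevious,"
    + "         b.km AS KM_pprevious,"
    + "         b.tracking_time AS tracking_time_pprevious,"
    + "         b.speed AS speed_pprevious,"
    + "         ROW_NUMBER() OVER (PARTITION BY a.ID_device_previous, a.ID_vehicule_previous"
    + "                            ORDER BY b.tracking_time DESC) AS rn"
    + "  FROM table1 a"
    + "  LEFT JOIN table2 b"
    + "    ON b.id_device = a.ID_device_previous"
    + "   AND b.id_vehicule = a.ID_vehicule_previous"
    + "   AND b.tracking_time < a.date_track_previous"
    + ") t WHERE rn = 1");
```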