How to select last value in a pySpark DataFrame based on a datetime column

I have a DataFrame df structured as follows:

date_time           id   value
2020-12-06 17:00    A    10
2020-12-06 17:05    A    18
2020-12-06 17:00    B    20
2020-12-06 17:05    B    28
2020-12-06 17:00    C    30
2020-12-06 17:05    C    38

And I have to select only the most recent row for each id into a DataFrame named df_last.
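
For illustration, given the sample data above, df_last should contain only the most recent (17:05) row for each id:

date_time           id   value
2020-12-06 17:05    A    18
2020-12-06 17:05    B    28
2020-12-06 17:05    C    38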

This is a solution that works:

from pyspark.sql import functions as F
from pyspark.sql.window import *

# Number the rows within each id, newest date_time first, then keep the first one
df_rows = df.withColumn('row_num', F.row_number().over(Window.partitionBy('id').orderBy(F.desc('date_time')))-1)
df_last = df_rows.filter(F.col('row_num')==0)

I wonder if there is a simpler/cleaner solution.

That's pretty much the way to do it. Just some minor improvements can be made - no need to subtract 1 from the row number:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# row_number() starts at 1, so the most recent row per id already has row_num = 1
df_rows = df.withColumn(
    'row_num', 
    F.row_number().over(Window.partitionBy('id').orderBy(F.desc('date_time')))
)
df_last = df_rows.filter('row_num = 1')
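
If the row_num helper column isn't wanted in the result, another option (a sketch, not part of the original answer) is to skip the window entirely and aggregate each id to the struct with the greatest date_time:

from pyspark.sql import functions as F

# Sketch: max() on a struct compares its fields left to right,
# so putting date_time first picks the most recent row per id
df_last = (
    df.groupBy('id')
      .agg(F.max(F.struct('date_time', 'value')).alias('latest'))
      .select('id', 'latest.date_time', 'latest.value')
)

The window version keeps every original column automatically, while this variant only returns the columns listed in the struct, so the choice mostly depends on how wide the DataFrame is.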
