PySpark: how to groupby, resample and forward-fill null values?

Consider the following dataset in Spark. I would like to resample the dates at a given frequency (say, every 5 minutes).

import datetime as dt
import pandas as pd

START_DATE = dt.datetime(2019,8,15,20,33,0)
test_df = pd.DataFrame({
    'school_id': ['remote','remote','remote','remote','onsite','onsite','onsite','onsite','remote','remote'],
    'class_id': ['green', 'green', 'red', 'red', 'green', 'green', 'green', 'green', 'red', 'green'],
    'user_id': [15,15,16,16,15,17,17,17,16,17],
    'status': [0,1,1,1,0,1,0,1,1,0],
    'start': pd.date_range(start=START_DATE, periods=10, freq='2min')
})

test_df.groupby(['school_id', 'class_id', 'user_id', 'start']).min()

But I also want the resampling to happen between two fixed timestamps: 2019-08-15 20:30:00 and 2019-08-15 21:00:00. Each group of school_id, class_id and user_id would then have 6 entries, one per 5-minute bucket between the two timestamps. The null entries generated by the resampling should be filled by forward-filling.

I used Pandas for the example dataset, but the actual dataframe will be pulled in Spark, so the approach I am looking for should be done in Spark as well.

I guess the approach is probably similar to the one in PySpark: how to resample frequencies, but I could not make it work for this case.

Thanks for your help.
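For context, the desired result is straightforward to produce in plain pandas, which makes the target concrete before moving to Spark. The sketch below is a pandas-only illustration (the `grid` variable and its endpoints are my assumption, taken from the 20:30:00–21:00:00 range in the question): it reindexes each group's `status` onto a fixed 5-minute grid and forward-fills from the last observation at or before each grid point.

```python
import datetime as dt
import pandas as pd

START_DATE = dt.datetime(2019, 8, 15, 20, 33, 0)
test_df = pd.DataFrame({
    'school_id': ['remote','remote','remote','remote','onsite','onsite','onsite','onsite','remote','remote'],
    'class_id': ['green', 'green', 'red', 'red', 'green', 'green', 'green', 'green', 'red', 'green'],
    'user_id': [15,15,16,16,15,17,17,17,16,17],
    'status': [0,1,1,1,0,1,0,1,1,0],
    'start': pd.date_range(start=START_DATE, periods=10, freq='2min')
})

# Fixed 5-minute grid between the two boundary timestamps (6 points)
grid = pd.date_range('2019-08-15 20:30:00', '2019-08-15 20:55:00', freq='5min')

# Reindex each group's status onto the grid; method='ffill' carries the
# last observation at or before each grid point forward
out = (test_df.set_index('start')
              .groupby(['school_id', 'class_id', 'user_id'])['status']
              .apply(lambda s: s.reindex(grid, method='ffill')))
```

Each of the 5 groups ends up with exactly 6 rows; grid points before a group's first event stay NaN, since there is nothing to fill from.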

This may not be the best way to get the final result; I just want to show the idea here.

  1. First, create the DataFrame and convert the timestamps to integers
from datetime import datetime

import numpy as np
import pandas as pd
from pytz import timezone

from pyspark.sql import functions as F
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructType, StructField, IntegerType

# Create DataFrame
START_DATE = datetime(2019,8,15,20,33,0)
test_df = pd.DataFrame({
    'school_id': ['remote','remote','remote','remote','onsite','onsite','onsite','onsite','remote','remote'],
    'class_id': ['green', 'green', 'red', 'red', 'green', 'green', 'green', 'green', 'red', 'green'],
    'user_id': [15,15,16,16,15,17,17,17,16,17],
    'status': [0,1,1,1,0,1,0,1,1,0],
    'start': pd.date_range(start=START_DATE, periods=10, freq='2min')
})

# Convert TimeStamp to Integers
df = spark.createDataFrame(test_df)
print(df.dtypes)
df = df.withColumn('start', F.col('start').cast("bigint"))
df.show()

Output:

+---------+--------+-------+------+----------+
|school_id|class_id|user_id|status|     start|
+---------+--------+-------+------+----------+
|   remote|   green|     15|     0|1565915580|
|   remote|   green|     15|     1|1565915700|
|   remote|     red|     16|     1|1565915820|
|   remote|     red|     16|     1|1565915940|
|   onsite|   green|     15|     0|1565916060|
|   onsite|   green|     17|     1|1565916180|
|   onsite|   green|     17|     0|1565916300|
|   onsite|   green|     17|     1|1565916420|
|   remote|     red|     16|     1|1565916540|
|   remote|   green|     17|     0|1565916660|
+---------+--------+-------+------+----------+
  2. Create the required time sequence
# Create the time sequence needed
start = datetime.strptime('2019-08-15 20:30:00', '%Y-%m-%d %H:%M:%S')
eastern = timezone('US/Eastern')
start = eastern.localize(start)
times = pd.date_range(start = start, periods = 6, freq='5min')
times = [s.timestamp() for s in times]
print(times)
[1565915400.0, 1565915700.0, 1565916000.0, 1565916300.0, 1565916600.0, 1565916900.0]
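If pulling in pytz is not desirable, the same epoch grid can be reproduced with only the standard library. This is a sketch under an assumption: it hard-codes the fixed EDT offset (UTC-4) that holds on 2019-08-15, whereas pytz handles DST transitions for you.

```python
from datetime import datetime, timedelta, timezone

# Fixed UTC-4 offset (EDT); assumed valid for this date, not across DST changes
edt = timezone(timedelta(hours=-4))
start = datetime(2019, 8, 15, 20, 30, 0, tzinfo=edt)
times = [(start + timedelta(minutes=5 * i)).timestamp() for i in range(6)]
print(times)
# [1565915400.0, 1565915700.0, 1565916000.0, 1565916300.0, 1565916600.0, 1565916900.0]
```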
  3. Finally, create the DataFrame for each group
# Use pandas_udf to create final DataFrame
schm = StructType(df.schema.fields + [StructField('epoch', IntegerType(), True)])
@pandas_udf(schm, PandasUDFType.GROUPED_MAP)
def resample(pdf):
    # One output row per grid point; the group keys are constant within a group
    pddf = pd.DataFrame({'epoch': times})
    pddf['school_id'] = pdf['school_id'].iloc[0]
    pddf['class_id'] = pdf['class_id'].iloc[0]
    pddf['user_id'] = pdf['user_id'].iloc[0]

    # Map each event's 'start' onto the grid point it belongs to
    res = np.searchsorted(times, pdf['start'])

    arr = np.full(len(times), np.nan)
    arr[res] = pdf['status']
    pddf['status'] = arr

    arr = np.full(len(times), np.nan)
    arr[res] = pdf['start']
    pddf['start'] = arr
    return pddf

df = df.groupBy('school_id', 'class_id', 'user_id').apply(resample)
df = df.withColumn('timestamp', F.to_timestamp(df['epoch']))
df.show(60)

Final result:

+---------+--------+-------+------+----------+----------+-------------------+
|school_id|class_id|user_id|status|     start|     epoch|          timestamp|
+---------+--------+-------+------+----------+----------+-------------------+
|   remote|     red|     16|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|     red|     16|  null|      null|1565915700|2019-08-15 20:35:00|
|   remote|     red|     16|     1|1565915940|1565916000|2019-08-15 20:40:00|
|   remote|     red|     16|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|     red|     16|     1|1565916540|1565916600|2019-08-15 20:50:00|
|   remote|     red|     16|  null|      null|1565916900|2019-08-15 20:55:00|
|   onsite|   green|     15|  null|      null|1565915400|2019-08-15 20:30:00|
|   onsite|   green|     15|  null|      null|1565915700|2019-08-15 20:35:00|
|   onsite|   green|     15|  null|      null|1565916000|2019-08-15 20:40:00|
|   onsite|   green|     15|     0|1565916060|1565916300|2019-08-15 20:45:00|
|   onsite|   green|     15|  null|      null|1565916600|2019-08-15 20:50:00|
|   onsite|   green|     15|  null|      null|1565916900|2019-08-15 20:55:00|
|   remote|   green|     17|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|   green|     17|  null|      null|1565915700|2019-08-15 20:35:00|
|   remote|   green|     17|  null|      null|1565916000|2019-08-15 20:40:00|
|   remote|   green|     17|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|   green|     17|  null|      null|1565916600|2019-08-15 20:50:00|
|   remote|   green|     17|     0|1565916660|1565916900|2019-08-15 20:55:00|
|   onsite|   green|     17|  null|      null|1565915400|2019-08-15 20:30:00|
|   onsite|   green|     17|  null|      null|1565915700|2019-08-15 20:35:00|
|   onsite|   green|     17|  null|      null|1565916000|2019-08-15 20:40:00|
|   onsite|   green|     17|     1|1565916180|1565916300|2019-08-15 20:45:00|
|   onsite|   green|     17|     1|1565916420|1565916600|2019-08-15 20:50:00|
|   onsite|   green|     17|  null|      null|1565916900|2019-08-15 20:55:00|
|   remote|   green|     15|  null|      null|1565915400|2019-08-15 20:30:00|
|   remote|   green|     15|     0|1565915580|1565915700|2019-08-15 20:35:00|
|   remote|   green|     15|  null|      null|1565916000|2019-08-15 20:40:00|
|   remote|   green|     15|  null|      null|1565916300|2019-08-15 20:45:00|
|   remote|   green|     15|  null|      null|1565916600|2019-08-15 20:50:00|
|   remote|   green|     15|  null|      null|1565916900|2019-08-15 20:55:00|
+---------+--------+-------+------+----------+----------+-------------------+

Now every group has 6 timestamps. Note that not all of the original 'status' and 'start' values make it into the final DataFrame. Because the resampling happens on a 5-minute grid while the events arrive every 2 minutes, two 'start' times can map to the same grid point, so one of them is lost here. This can be adjusted inside the udf depending on your frequency and on how you want to keep the data.
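The collision can be seen directly with np.searchsorted on the remote/red/16 group (the event epochs below are copied from the tables above): its three events land on grid indices 2, 2 and 4, and with NumPy fancy-index assignment the last write to a duplicated index wins, so the first event is dropped.

```python
import numpy as np

times = [1565915400.0, 1565915700.0, 1565916000.0,
         1565916300.0, 1565916600.0, 1565916900.0]
starts = [1565915820, 1565915940, 1565916540]   # remote/red/16 events
res = np.searchsorted(times, starts)
print(res)  # [2 2 4] -- two events collide on grid index 2

arr = np.full(len(times), np.nan)
arr[res] = starts   # duplicated index: the last write to index 2 wins
print(arr)          # only 1565915940 and 1565916540 survive
```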
