
pyspark lag function with inconsistent time series

import pyspark.sql.functions as F
from pyspark.sql.window import Window

I would like to use a window function to find the value from a column 4 periods ago.

Suppose my data (df) looks like this (in reality I have many different IDs):

ID | value | period
a  |  100  |   1
a  |  200  |   2
a  |  300  |   3
a  |  400  |   5
a  |  500  |   6
a  |  600  |   7

If the time series were consistent (e.g. periods 1-6), I could just use F.lag(df['value'], count=4).over(Window.partitionBy('id').orderBy('period'))

However, because the time series has a discontinuity, the values would be displaced.
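
To see the displacement concretely, here is a minimal sketch of the naive row-based lag applied to the data above (assuming the imports at the top; a positional offset is used so it runs on both Spark 2.x and 3.x):

# A minimal sketch of the naive fixed-offset lag on the example data.
naive_window = Window.partitionBy('id').orderBy('period')
df.withColumn('naive_lag', F.lag(df['value'], 4).over(naive_window)).show()
# Period 6 gets 100 (the value 4 *rows* back, from period 1)
# rather than 200 (the value 4 *periods* back, from period 2).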

My desired output would be this:

ID | value | period | 4_lag_value
a  |  100  |   1    |    null
a  |  200  |   2    |    null
a  |  300  |   3    |    null
a  |  400  |   5    |     100
a  |  500  |   6    |     200
a  |  600  |   7    |     300

How can I do this in pyspark?

This is probably what you're looking for:

from pyspark.sql import Window, functions as F

def pyspark_timed_lag_values(df, lags, avg_diff, state_id='state_id', ds='ds', y='y'):
    # Build the full grid of expected timestamps per state: a date sequence from
    # the first to the last observed ds, stepping by the expected gap in days.
    interval_expr = 'sequence(min_ds, max_ds, interval {0} day)'.format(avg_diff)
    all_comb = (df.groupBy(F.col(state_id))
                .agg(F.min(ds).alias('min_ds'), F.max(ds).alias('max_ds'))
                .withColumn(ds, F.explode(F.expr(interval_expr)))
                .select(state_id, ds))

    # Left-join the actual observations onto the grid; 'exists' marks real rows.
    all_comb = all_comb.join(df.withColumn('exists', F.lit(True)), on=[state_id, ds], how='left')

    # On the gap-free grid, a row-based lag is now also a time-based lag.
    window = Window.partitionBy(state_id).orderBy(F.col(ds).asc())
    for lag in lags:
        all_comb = all_comb.withColumn("{0}_{1}".format(y, lag), F.lag(y, lag).over(window))

    # Keep only the rows that existed in the original data.
    all_comb = all_comb.filter(F.col('exists')).drop('exists')
    return all_comb

Let's apply it to an example:

data = spark.sparkContext.parallelize([
        (1,"2021-01-03",100),
        (1,"2021-01-10",830),
        (1,"2021-01-17",300),
        (1,"2021-02-07",450),
        (2,"2021-01-03",500),
        (2,"2021-01-17",800),
        (2,"2021-02-14",800)])


example = spark.createDataFrame(data, ['state_id','ds','y'])
example = example.withColumn('ds', F.to_date(F.col('ds')))

n_periods = 7  # produce lags y_1 .. y_7
lags = list(range(1, n_periods + 1))
result = pyspark_timed_lag_values(example, lags=lags, avg_diff=7)

This produces the following output:

+--------+----------+---+----+----+----+----+----+----+----+
|state_id|        ds|  y| y_1| y_2| y_3| y_4| y_5| y_6| y_7|
+--------+----------+---+----+----+----+----+----+----+----+
|       1|2021-01-03|100|null|null|null|null|null|null|null|
|       1|2021-01-10|830| 100|null|null|null|null|null|null|
|       1|2021-01-17|300| 830| 100|null|null|null|null|null|
|       1|2021-02-07|450|null|null| 300| 830| 100|null|null|
|       2|2021-01-03|500|null|null|null|null|null|null|null|
|       2|2021-01-17|800|null| 500|null|null|null|null|null|
|       2|2021-02-14|800|null|null|null| 800|null| 500|null|
+--------+----------+---+----+----+----+----+----+----+----+

Right now it's set up for dates, but with a small adaptation it should be applicable to various use cases; a sketch of that adaptation follows below. The disadvantage here is the need to explode all possible date combinations and to create the helper DataFrame all_comb.
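
As a hedged sketch of that small adaptation (not part of the original answer): Spark's sequence() also accepts plain integers, so for integer periods like those in the original question, only the grid expression inside pyspark_timed_lag_values needs to change:

# Hypothetical tweak for integer periods instead of dates: drop the day interval
# and step the sequence by a plain integer (the expected spacing between periods).
interval_expr = 'sequence(min_ds, max_ds, {0})'.format(avg_diff)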

The real benefit of this solution is that it's applicable to most time-series use cases, as the parameter avg_diff defines the expected distance between time periods.

Just to mention, there is probably a cleaner Hive SQL alternative to this.
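
One such candidate, offered as a hedged sketch rather than a tested drop-in: for the integer periods of the original question, a range window frame selects exactly the row 4 periods back, with no grid or explode needed (the view name periods is hypothetical):

# Hypothetical sketch: the RANGE frame contains exactly the row whose period is
# the current period minus 4; MAX over that one-row (or empty) frame returns the
# lagged value, or null when that period is missing.
df.createOrReplaceTempView('periods')
result = spark.sql("""
    SELECT id, value, period,
           MAX(value) OVER (PARTITION BY id ORDER BY period
                            RANGE BETWEEN 4 PRECEDING AND 4 PRECEDING) AS lag_4_value
    FROM periods
""")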

I've come up with a solution, but it seems unnecessarily ugly; I'd welcome anything better!

data = spark.sparkContext.parallelize([
        ('a',100,1),
        ('a',200,2),
        ('a',300,3),
        ('a',400,5),
        ('a',500,6),
        ('a',600,7)])

df = spark.createDataFrame(data, ['id','value','period'])

window = Window.partitionBy('id').orderBy('period')

# look 1, 2, 3 and 4 rows behind
# (a positional offset is used here: Spark 2.x named the keyword `count`,
# Spark 3+ renamed it to `offset`, but the positional form works on both):
for diff in [1, 2, 3, 4]:
    df = df.withColumn('{}_diff'.format(diff),
                       df['period'] - F.lag(df['period'], diff).over(window))

# if any of these gaps is exactly 4, that row holds the lag we need;
# if none are, there is no value from 4 periods back, so leave None

# initialise the output column
df = df.withColumn('4_lag_value', F.lit(None))
# loop:
for diff in [1, 2, 3, 4]:
    df = df.withColumn('4_lag_value',
                       F.when(df['{}_diff'.format(diff)] == 4,
                              F.lag(df['value'], diff).over(window))
                        .otherwise(df['4_lag_value']))

# drop working cols
df = df.drop(*['{}_diff'.format(diff) for diff in [1,2,3,4]])

This returns the desired output.
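
A slightly more compact variant of the same idea, offered as a hedged sketch: build all four guarded lag candidates and coalesce them, since at most one of the period gaps can be exactly 4.

# Hedged sketch: F.when(...) without .otherwise() yields null, so coalescing the
# four candidates keeps the single lag whose period gap is exactly 4 (or null).
window = Window.partitionBy('id').orderBy('period')
candidates = [F.when(df['period'] - F.lag(df['period'], diff).over(window) == 4,
                     F.lag(df['value'], diff).over(window))
              for diff in [1, 2, 3, 4]]
df = df.withColumn('4_lag_value', F.coalesce(*candidates))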
