
How to expand out a Pyspark dataframe based on column?

How do I expand out a dataframe based on column values? I intend to go from this dataframe:

+---------+----------+----------+
|DEVICE_ID|  MIN_DATE|  MAX_DATE|
+---------+----------+----------+
|        1|2019-08-29|2019-08-31|
|        2|2019-08-27|2019-09-02|
+---------+----------+----------+

to one that looks like this:

+---------+----------+
|DEVICE_ID|      DATE|
+---------+----------+
|        1|2019-08-29|
|        1|2019-08-30|
|        1|2019-08-31|
|        2|2019-08-27|
|        2|2019-08-28|
|        2|2019-08-29|
|        2|2019-08-30|
|        2|2019-08-31|
|        2|2019-09-01|
|        2|2019-09-02|
+---------+----------+

Any help would be much appreciated.
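One approach, shown below: use a UDF to build a comma-separated string of every date between the start and end dates, then split that string into an array and explode it into one row per date.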

from datetime import timedelta

from pyspark.sql import functions as F
from pyspark.sql.functions import udf

# Create a sample data row (spark is the active SparkSession).
df = spark.sql("""
select 'dev1' as device_id,
to_date('2020-01-06') as start,
to_date('2020-01-09') as end""")

# Define a UDF that returns a comma-separated string of every date
# from start to end, inclusive.
@udf
def datelist(start, end):
    return ",".join(str(start + timedelta(days=x)) for x in range(0, 1 + (end - start).days))

# Split the string into an array of dates and explode it into rows.
df.select("device_id",
          F.explode(
              F.split(datelist(df["start"], df["end"]), ","))
          .alias("date")).show(10, False)
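
On Spark 2.4+, the same expansion can be done without a Python UDF by generating the date array with the built-in sequence function. A minimal sketch, assuming df is the sample frame built above:

from pyspark.sql import functions as F

# sequence(start, stop) on date columns yields an array of consecutive
# dates from start to stop inclusive (default step: interval 1 day);
# explode turns that array into one row per date.
expanded = df.select(
    "device_id",
    F.explode(F.sequence(F.col("start"), F.col("end"))).alias("date"),
)
expanded.show(10, False)

Keeping the work in built-in functions avoids serializing rows out to a Python UDF, so this version is usually faster on large frames.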
