
Insert missing list elements as rows per time-window group in a DataFrame

I'm trying to figure this out programmatically and it seems like a difficult problem: if a sensor item is not captured in a given timestamp interval of the time-series source data, I want to append a row for each missing sensor item with a NULL value for that timestamp window.

# list of sensor items (there are 300+; only 4 shown as an example)
sensor_list = ["temp", "pressure", "vacuum", "burner"]

# sample data
df = spark.createDataFrame([
    ('2019-05-10 7:30:05', 'temp', '99'),
    ('2019-05-10 7:30:05', 'burner', 'TRUE'),
    ('2019-05-10 7:30:10', 'vacuum', '.15'),
    ('2019-05-10 7:30:10', 'burner', 'FALSE'),
    ('2019-05-10 7:30:10', 'temp', '75'),
    ('2019-05-10 7:30:15', 'temp', '77'),
    ('2019-05-10 7:30:20', 'pressure', '.22'),
    ('2019-05-10 7:30:20', 'temp', '101'),
], ["date", "item", "value"])
# current dilemma: not every sensor item appears in every window; the current back-end streaming design only records updates to sensors
+------------------+--------+-----+
|              date|    item|value|
+------------------+--------+-----+
|2019-05-10 7:30:05|    temp|   99|
|2019-05-10 7:30:05|  burner| TRUE|

|2019-05-10 7:30:10|  vacuum|  .15|
|2019-05-10 7:30:10|  burner|FALSE|
|2019-05-10 7:30:10|    temp|   75|

|2019-05-10 7:30:15|    temp|   77|

|2019-05-10 7:30:20|pressure|  .22|
|2019-05-10 7:30:20|    temp|  101|
+------------------+--------+-----+

I want every sensor item captured per timestamp so that forward-fill imputation can be performed before pivoting the DataFrame. Forward filling across 300+ pivoted columns with window functions is currently failing with:

Spark Caused by: java.lang.StackOverflowError
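
For reference, this is roughly the kind of wide-format forward fill I mean (a sketch, not the exact code; the real frame has 300+ pivoted columns):

from pyspark.sql import Window
from pyspark.sql import functions as F

# pivot to one column per sensor, then forward-fill each column with its own window expression
w = Window.orderBy("date").rowsBetween(Window.unboundedPreceding, Window.currentRow)
wide_df = df.groupBy("date").pivot("item").agg(F.first("value"))

filled = wide_df
for c in wide_df.columns:
    if c != "date":
        # hundreds of chained withColumn calls build a very deep query plan
        filled = filled.withColumn(c, F.last(c, ignorenulls=True).over(w))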

# desired output
+------------------+--------+-----+
|              date|    item|value|
+------------------+--------+-----+
|2019-05-10 7:30:05|    temp|   99|
|2019-05-10 7:30:05|  burner| TRUE|
|2019-05-10 7:30:05|  vacuum| NULL|
|2019-05-10 7:30:05|pressure| NULL|

|2019-05-10 7:30:10|  vacuum|  .15|
|2019-05-10 7:30:10|  burner|FALSE|
|2019-05-10 7:30:10|    temp|   75|
|2019-05-10 7:30:10|pressure| NULL|

|2019-05-10 7:30:15|    temp|   77|
|2019-05-10 7:30:15|pressure| NULL|
|2019-05-10 7:30:15|  burner| NULL|
|2019-05-10 7:30:15|  vacuum| NULL|

|2019-05-10 7:30:20|pressure|  .22|
|2019-05-10 7:30:20|    temp|  101|
|2019-05-10 7:30:20|  vacuum| NULL|
|2019-05-10 7:30:20|  burner| NULL|
+------------------+--------+-----+

Expanding on my comment:

You can right join your DataFrame with the Cartesian product of the distinct dates and the sensor_list. Since sensor_list is small, you can broadcast it.

from pyspark.sql.functions import broadcast

sensor_list = ["temp", "pressure", "vacuum", "burner"]

df.join(
    df.select('date')\
        .distinct()\
        .crossJoin(broadcast(spark.createDataFrame([(x,) for x in sensor_list], ["item"]))),
    on=["date", "item"],
    how="right"
).sort("date", "item").show()
#+------------------+--------+-----+
#|              date|    item|value|
#+------------------+--------+-----+
#|2019-05-10 7:30:05|  burner| TRUE|
#|2019-05-10 7:30:05|pressure| null|
#|2019-05-10 7:30:05|    temp|   99|
#|2019-05-10 7:30:05|  vacuum| null|
#|2019-05-10 7:30:10|  burner|FALSE|
#|2019-05-10 7:30:10|pressure| null|
#|2019-05-10 7:30:10|    temp|   75|
#|2019-05-10 7:30:10|  vacuum|  .15|
#|2019-05-10 7:30:15|  burner| null|
#|2019-05-10 7:30:15|pressure| null|
#|2019-05-10 7:30:15|    temp|   77|
#|2019-05-10 7:30:15|  vacuum| null|
#|2019-05-10 7:30:20|  burner| null|
#|2019-05-10 7:30:20|pressure|  .22|
#|2019-05-10 7:30:20|    temp|  101|
#|2019-05-10 7:30:20|  vacuum| null|
#+------------------+--------+-----+
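
With the missing rows in place, the forward fill can be done on this long format with a single window (partitioned by item) instead of one expression per pivoted column, and you only pivot afterwards. A sketch, assuming the joined result above is saved as full_df and that ordering by the date string is acceptable (with real data you would likely cast it to a timestamp first):

from pyspark.sql import Window
from pyspark.sql import functions as F

# carry the last non-null value forward within each sensor item, ordered by timestamp
w = Window.partitionBy("item").orderBy("date")\
    .rowsBetween(Window.unboundedPreceding, Window.currentRow)

filled = full_df.withColumn("value", F.last("value", ignorenulls=True).over(w))

# pivot to one column per sensor only after the fill, if a wide frame is still needed
wide = filled.groupBy("date").pivot("item").agg(F.first("value"))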
