I have a Spark dataframe which shows (daily) how many times a product has been used. It looks like this:
| x_id | product | usage | yyyy_mm_dd | status |
|------|---------|-------|------------|--------|
| 10 | prod_go | 15 | 2020-10-10 | i |
| 10 | prod_rv | 7 | 2020-10-10 | fc |
| 10 | prod_mb | 0 | 2020-10-10 | n |
| 15 | prod_go | 0 | 2020-10-10 | n |
| 15 | prod_rv | 5 | 2020-10-10 | fc |
| 15 | prod_mb | 1 | 2020-10-10 | fc |
| 10 | prod_go | 20 | 2020-10-11 | i |
| 10 | prod_rv | 11 | 2020-10-11 | i |
| 10 | prod_mb | 3 | 2020-10-11 | fc |
| 15 | prod_go | 0 | 2020-10-11 | n |
| 15 | prod_rv | 5 | 2020-10-11 | fc |
| 15 | prod_mb | 1 | 2020-10-11 | fc |
The `status` column is based on `usage`: when `usage` is 0, the status is `n`; when `usage` is between 1 and 9, the status is `fc`; and when `usage` is >= 10, the status is `i`.
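(In case it helps to reproduce the setup: the mapping above can be expressed with chained `F.when` conditions. This is just a sketch assuming `usage` is a non-negative integer; the real data already carries `status`.)

```python
from pyspark.sql import functions as F

# Derive `status` from `usage` per the rules above (assumes usage >= 0).
df = df.withColumn(
    'status',
    F.when(F.col('usage') == 0, 'n')
     .when(F.col('usage').between(1, 9), 'fc')  # between() is inclusive
     .otherwise('i')                            # remaining case: usage >= 10
)
```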
I would like to introduce two additional columns to this Spark dataframe, `date_reached_fc` and `date_reached_i`. These columns should hold the `min(yyyy_mm_dd)` on which an `x_id` reached the respective status for a given `product`.
Based on the sample data, the output would look like this:
| x_id | product | usage | yyyy_mm_dd | status | date_reached_fc | date_reached_i |
|------|---------|-------|------------|--------|-----------------|----------------|
| 10 | prod_go | 15 | 2020-10-10 | i | null | 2020-10-10 |
| 10 | prod_rv | 7 | 2020-10-10 | fc | 2020-10-10 | null |
| 10 | prod_mb | 0 | 2020-10-10 | n | null | null |
| 15 | prod_go | 0 | 2020-10-10 | n | null | null |
| 15 | prod_rv | 5 | 2020-10-10 | fc | 2020-10-10 | null |
| 15 | prod_mb | 1 | 2020-10-10 | fc | 2020-10-10 | null |
| 10 | prod_go | 20 | 2020-10-11 | i | null | 2020-10-10 |
| 10 | prod_rv | 11 | 2020-10-11 | i | 2020-10-10 | 2020-10-11 |
| 10 | prod_mb | 3 | 2020-10-11 | fc | 2020-10-11 | null |
| 15 | prod_go | 0 | 2020-10-11 | n | null | null |
| 15 | prod_rv | 5 | 2020-10-11 | fc | 2020-10-10 | null |
| 15 | prod_mb | 1 | 2020-10-11 | fc | 2020-10-10 | null |
The ordering is a bit different from your question, but the results should be correct. Basically, take `min` over a window, and use `when` to keep only the dates on which the relevant status was present.
```python
from pyspark.sql import functions as F, Window

# One window per (x_id, product), ordered by date (usage breaks ties). With an
# orderBy, the default frame is unboundedPreceding..currentRow (a running min).
w = Window.partitionBy('x_id', 'product').orderBy('yyyy_mm_dd', 'usage')

df2 = (
    df.withColumn(
        'date_reached_fc',
        F.min(F.when(F.col('status') == 'fc', F.col('yyyy_mm_dd'))).over(w)
    ).withColumn(
        'date_reached_i',
        F.min(F.when(F.col('status') == 'i', F.col('yyyy_mm_dd'))).over(w)
    ).orderBy('x_id', 'product', 'yyyy_mm_dd', 'usage')
)

df2.show()
```
```
+----+-------+-----+----------+------+---------------+--------------+
|x_id|product|usage|yyyy_mm_dd|status|date_reached_fc|date_reached_i|
+----+-------+-----+----------+------+---------------+--------------+
|  10|prod_go|   15|2020-10-10|     i|           null|    2020-10-10|
|  10|prod_go|   20|2020-10-11|     i|           null|    2020-10-10|
|  10|prod_mb|    0|2020-10-10|     n|           null|          null|
|  10|prod_mb|    3|2020-10-11|    fc|     2020-10-11|          null|
|  10|prod_rv|    7|2020-10-10|    fc|     2020-10-10|          null|
|  10|prod_rv|   11|2020-10-11|     i|     2020-10-10|    2020-10-11|
|  15|prod_go|    0|2020-10-10|     n|           null|          null|
|  15|prod_go|    0|2020-10-11|     n|           null|          null|
|  15|prod_mb|    1|2020-10-10|    fc|     2020-10-10|          null|
|  15|prod_mb|    1|2020-10-11|    fc|     2020-10-10|          null|
|  15|prod_rv|    5|2020-10-10|    fc|     2020-10-10|          null|
|  15|prod_rv|    5|2020-10-11|    fc|     2020-10-10|          null|
+----+-------+-----+----------+------+---------------+--------------+
```
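Note that because the window has an `orderBy`, each row only sees the earliest qualifying date up to that row; that is why `date_reached_fc` for `prod_mb` of `x_id` 10 is still null on 2020-10-10 even though it reaches `fc` on 2020-10-11, which matches your expected output. If you ever wanted the reached-date on every row of a partition instead, a window without ordering spans the whole partition (this variant is an assumption about intent, not something your question asks for):

```python
# Unordered window: min is taken over the whole (x_id, product) partition,
# so the first fc-date would appear on every row, including earlier days.
w_all = Window.partitionBy('x_id', 'product')

df3 = df.withColumn(
    'date_reached_fc',
    F.min(F.when(F.col('status') == 'fc', F.col('yyyy_mm_dd'))).over(w_all)
)
```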