
How do I use one dataframe's data to create an aggregated column, then expand rows using another dataframe, in PySpark?

I have a data frame that gives me funding for various products at different levels. It is a wide data frame that shows funding from 2021-Jan-01 to 2021-Dec-31 (Funding_Start_Date and Funding_End_Date are in yyyyMMdd format).

from pyspark.sql.types import StructType, StructField, IntegerType, StringType, FloatType
from pyspark.sql.functions import col

funding_data = [
    (20210101, 20211231, "Family", "Cars", "Audi", "A4", 420.0, 12345, "Lump_Sum", 50000)
]

funding_schema = StructType([
    StructField("Funding_Start_Date", IntegerType(), True),
    StructField("Funding_End_Date", IntegerType(), True),
    StructField("Funding_Level", StringType(), True),
    StructField("Type", StringType(), True),
    StructField("Brand", StringType(), True),
    StructField("Brand_Low", StringType(), True),
    StructField("Family", FloatType(), True),
    StructField("SKU_ID", IntegerType(), True),
    StructField("Allocation_Basis", StringType(), True),
    StructField("Amount", IntegerType(), True)
])

funding_df = spark.createDataFrame(data=funding_data, schema=funding_schema)
funding_df.show()

+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+
|Funding_Start_Date|Funding_End_Date|Funding_Level|Type|Brand|Brand_Low|Family|SKU_ID|Allocation_Basis|Amount|
+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|
+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+

I want to have a row for each day of funding, with a per-day Amount, subject to the following condition:

a sale was made on that day at that Funding_Level.

I have a sales table at a Date and SKU level.

sales_data = [
    (20210105, 352210, "Cars", "Audi", "A4", 420.0, 1),
    (20210106, 352207, "Cars", "Audi", "A4", 420.0, 5),
    (20210106, 352196, "Cars", "Audi", "A4", 420.0, 2),
    (20210109, 352212, "Cars", "Audi", "A4", 420.0, 3),
    (20210112, 352212, "Cars", "Audi", "A4", 420.0, 1),
    (20210112, 352212, "Cars", "Audi", "A4", 420.0, 2),
    (20210112, 352212, "Cars", "BMW", "X6", 325.0, 2),
    (20210126, 352196, "Cars", "Audi", "A4", 420.0, 1)
]

sales_schema = StructType([
    StructField("DATE_ID", IntegerType(), True),
    StructField("SKU_ID", IntegerType(), True),
    StructField("Type", StringType(), True),
    StructField("Brand", StringType(), True),
    StructField("Brand_Low", StringType(), True),
    StructField("Family", FloatType(), True),
    StructField("Quantity", IntegerType(), True)
])

sales_df = spark.createDataFrame(data=sales_data, schema=sales_schema)
sales_df.show()

+--------+------+----+-----+---------+------+--------+
| DATE_ID|SKU_ID|Type|Brand|Brand_Low|Family|Quantity|
+--------+------+----+-----+---------+------+--------+
|20210105|352210|Cars| Audi|       A4| 420.0|       1|
|20210106|352207|Cars| Audi|       A4| 420.0|       5|
|20210106|352196|Cars| Audi|       A4| 420.0|       2|
|20210109|352212|Cars| Audi|       A4| 420.0|       3|
|20210112|352212|Cars| Audi|       A4| 420.0|       1|
|20210112|352212|Cars| Audi|       A4| 420.0|       2|
|20210112|352212|Cars|  BMW|       X6| 325.0|       2|
|20210126|352196|Cars| Audi|       A4| 420.0|       1|
+--------+------+----+-----+---------+------+--------+

This tells me there are 5 unique days on which a product with a Family of 420.0 was sold.

sales_df.filter(col('Family') == 420.0).select('DATE_ID').distinct().show()

+--------+
| DATE_ID|
+--------+
|20210112|
|20210109|
|20210105|
|20210106|
|20210126|
+--------+

So the Lumpsum/Day would be 50000 / 5 = 10000.
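
In other words, for this one example the per-day amount could be computed directly (hard-coding the Family value and Amount from funding_df just for illustration, using the imports above):

# Count the distinct sale days for the funded Family (5 in this example)
sale_days = sales_df.filter(col('Family') == 420.0).select('DATE_ID').distinct().count()

# Spread the Lump_Sum Amount evenly across those days: 50000 / 5 = 10000.0
lumpsum_per_day = 50000 / sale_days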

So I'm trying to get a final data frame like this:

+--------+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+-----------+
| DATE_ID|Funding_Start_Date|Funding_End_Date|Funding_Level|Type|Brand|Brand_Low|Family|SKU_ID|Allocation_Basis|Amount|Lumpsum/Day|
+--------+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+-----------+
|20210105|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|      10000|
|20210106|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|      10000|
|20210109|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|      10000|
|20210112|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|      10000|
|20210126|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|      10000|
+--------+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+-----------+

I've tried UDFs, but I wasn't able to pass sales_df into one to count the days and divide the Lump_Sum Amount by that count, as UDFs don't accept data frames.

How do I get to this final data frame from the above two data frames?

To find the Lumpsum/Day per Family, Funding_Start_Date and Funding_End_Date:

  1. Convert Funding_Start_Date, Funding_End_Date and DATE_ID to DateType.
  2. Select distinct DATE_ID and Family from sales_df.
  3. Join funding_df and sales_df such that DATE_ID is between Funding_Start_Date and Funding_End_Date and the Family values match.
  4. Apply a count window aggregation over Funding_Start_Date, Funding_End_Date and Family to find the number of days with sales.
  5. Divide Amount by the result of step 4 to arrive at Lumpsum/Day.

from pyspark.sql.types import *
from pyspark.sql import functions as F
from pyspark.sql import Window

funding_data = [
    (20210101, 20211231, "Family", "Cars", "Audi", "A4", 420.0, 12345, "Lump_Sum", 50000)
]

funding_schema = StructType([
    StructField("Funding_Start_Date", IntegerType(), True),
    StructField("Funding_End_Date", IntegerType(), True),
    StructField("Funding_Level", StringType(), True),
    StructField("Type", StringType(), True),
    StructField("Brand", StringType(), True),
    StructField("Brand_Low", StringType(), True),
    StructField("Family", FloatType(), True),
    StructField("SKU_ID", IntegerType(), True),
    StructField("Allocation_Basis", StringType(), True),
    StructField("Amount", IntegerType(), True)
])

funding_df = spark.createDataFrame(data=funding_data, schema=funding_schema)

# STEP 1
funding_df = (funding_df.withColumn("Funding_Start_Date", F.to_date(F.col("Funding_Start_Date").cast("string"), "yyyyMMdd"))
                        .withColumn("Funding_End_Date", F.to_date(F.col("Funding_End_Date").cast("string"), "yyyyMMdd")))

sales_data = [
    (20210105, 352210, "Cars", "Audi", "A4", 420.0, 1),
    (20210106, 352207, "Cars", "Audi", "A4", 420.0, 5),
    (20210106, 352196, "Cars", "Audi", "A4", 420.0, 2),
    (20210109, 352212, "Cars", "Audi", "A4", 420.0, 3),
    (20210112, 352212, "Cars", "Audi", "A4", 420.0, 1),
    (20210112, 352212, "Cars", "Audi", "A4", 420.0, 2),
    (20210112, 352212, "Cars", "BMW", "X6", 325.0, 2),
    (20210126, 352196, "Cars", "Audi", "A4", 420.0, 1)
]

sales_schema = StructType([
    StructField("DATE_ID", IntegerType(), True),
    StructField("SKU_ID", IntegerType(), True),
    StructField("Type", StringType(), True),
    StructField("Brand", StringType(), True),
    StructField("Brand_Low", StringType(), True),
    StructField("Family", FloatType(), True),
    StructField("Quantity", IntegerType(), True)
])

sales_df = spark.createDataFrame(data=sales_data, schema=sales_schema)

# STEP 1
sales_df = sales_df.withColumn("DATE_ID", F.to_date(F.col("DATE_ID").cast("string"), "yyyyMMdd"))

# STEP 2
sales_df = sales_df.select("DATE_ID", "Family").distinct()

# STEP 3
joined_df = funding_df.join(
    sales_df,
    sales_df["DATE_ID"].between(funding_df["Funding_Start_Date"], funding_df["Funding_End_Date"])
    & (funding_df["Family"] == sales_df["Family"])
)
joined_df = joined_df.select(*[funding_df[c] for c in funding_df.columns], "DATE_ID")

# STEP 4 and 5
# Window spanning the whole partition, so the count covers every sale day for that funding row
ws = Window.partitionBy("Funding_Start_Date", "Funding_End_Date", "Family").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

(joined_df.withColumn("Lumpsum/Day", F.col("Amount") / F.count("DATE_ID").over(ws))
          .withColumn("Funding_Start_Date", F.date_format("Funding_Start_Date", "yyyyMMdd").cast("int"))
          .withColumn("Funding_End_Date", F.date_format("Funding_End_Date", "yyyyMMdd").cast("int"))
          .withColumn("DATE_ID", F.date_format("DATE_ID", "yyyyMMdd").cast("int"))
).show()

Output

+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+--------+-----------+
|Funding_Start_Date|Funding_End_Date|Funding_Level|Type|Brand|Brand_Low|Family|SKU_ID|Allocation_Basis|Amount| DATE_ID|Lumpsum/Day|
+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+--------+-----------+
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|20210106|    10000.0|
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|20210112|    10000.0|
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|20210126|    10000.0|
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|20210105|    10000.0|
|          20210101|        20211231|       Family|Cars| Audi|       A4| 420.0| 12345|        Lump_Sum| 50000|20210109|    10000.0|
+------------------+----------------+-------------+----+-----+---------+------+------+----------------+------+--------+-----------+
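
If you'd rather not use a window, an equivalent way to do steps 4 and 5 is to aggregate the day counts per funding key with a groupBy and join them back onto joined_df. A minimal sketch under the same setup (days_df and Days_With_Sales are illustrative names, not from the original code):

# STEP 4 and 5 (alternative): count distinct sale days per funding key, then join back
days_df = (joined_df.groupBy("Funding_Start_Date", "Funding_End_Date", "Family")
                    .agg(F.countDistinct("DATE_ID").alias("Days_With_Sales")))

result_df = (joined_df.join(days_df, ["Funding_Start_Date", "Funding_End_Date", "Family"])
                      .withColumn("Lumpsum/Day", F.col("Amount") / F.col("Days_With_Sales"))
                      .drop("Days_With_Sales"))

This yields the same Lumpsum/Day values; the window version simply avoids the explicit self-join.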
