I want to create a new column, number, for each partition of a PySpark DataFrame, which is incremented whenever the value in the column year changes.
Original data:
name period year
A    1      2010
A    1      2010
A    1      2011
A    1      2013
B    1      2018
B    1      2019
C    2      2018
C    2      2018
C    2      2019
Expected Output:
name period year number
A    1      2010 1
A    1      2010 1
A    1      2011 2
A    1      2013 3
B    1      2018 1
B    1      2019 2
C    2      2018 1
C    2      2018 1
C    2      2019 2
First, create the sample DataFrame you provided:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# A SparkSession must exist before createDataFrame can be called
spark = SparkSession.builder.getOrCreate()

data = [{"name": 'A', "period": 1, "year": 2010},
        {"name": 'A', "period": 1, "year": 2010},
        {"name": 'A', "period": 1, "year": 2011},
        {"name": 'A', "period": 1, "year": 2013},
        {"name": 'B', "period": 1, "year": 2018},
        {"name": 'B', "period": 1, "year": 2019},
        {"name": 'C', "period": 2, "year": 2018},
        {"name": 'C', "period": 2, "year": 2018},
        {"name": 'C', "period": 2, "year": 2019}]
df = spark.createDataFrame(data)
Then define a window partitioned by name and ordered by year, and apply dense_rank over it. dense_rank assigns the same rank to equal year values and increments the rank only when year changes, which is exactly the numbering you want:
window = (Window.partitionBy('name').orderBy(F.col('year').asc()))
df = df.withColumn('number', F.dense_rank().over(window)).orderBy("name", "year")
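For intuition, here is a minimal plain-Python sketch (no Spark required) of what dense_rank does within each name partition: the counter bumps only when the ordering value (year) changes. The helper name dense_rank_per_name is mine, not a PySpark API:

```python
from itertools import groupby

rows = [("A", 1, 2010), ("A", 1, 2010), ("A", 1, 2011), ("A", 1, 2013),
        ("B", 1, 2018), ("B", 1, 2019),
        ("C", 2, 2018), ("C", 2, 2018), ("C", 2, 2019)]

def dense_rank_per_name(rows):
    """Emit (name, period, year, number), where number is the dense rank
    of year within each name group."""
    out = []
    # Sort by (name, period, year) so equal years sit next to each other,
    # then group by name to mimic partitionBy('name')
    for _, partition in groupby(sorted(rows), key=lambda r: r[0]):
        rank, prev_year = 0, None
        for name, period, year in partition:
            if year != prev_year:  # new year value -> increment the rank
                rank += 1
                prev_year = year
            out.append((name, period, year, rank))
    return out

print(dense_rank_per_name(rows))
```

The key difference from rank is that dense_rank leaves no gaps after ties: the two 2010 rows for A both get 1, and 2011 gets 2 rather than 3.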
Result of df.show():
+----+------+----+------+
|name|period|year|number|
+----+------+----+------+
|   A|     1|2010|     1|
|   A|     1|2010|     1|
|   A|     1|2011|     2|
|   A|     1|2013|     3|
|   B|     1|2018|     1|
|   B|     1|2019|     2|
|   C|     2|2018|     1|
|   C|     2|2018|     1|
|   C|     2|2019|     2|
+----+------+----+------+