Derived column in pySpark using two columns and previous row's value
I would like to create a column on my Spark dataframe with operations on two columns.
I want to create the column `Areas`, which is calculated with the formula:
( (Pct_Buenos_Acum[i]-Pct_Buenos_Acum[i-1]) * (Pct_Malos_Acum[i]+Pct_Malos_Acum[i-1]) ) / 2
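To make the row-wise formula concrete, here is a minimal pure-Python sketch of the same computation over plain lists. The sample values are hypothetical, not from the question; the first row gets `None` because there is no previous row (just as `lag` yields null in Spark):

```python
# Hypothetical cumulative-percentage values (illustration only)
pct_buenos_acum = [0.0, 0.2, 0.5, 1.0]
pct_malos_acum = [0.0, 0.4, 0.8, 1.0]

areas = [None]  # first row has no previous value
for i in range(1, len(pct_buenos_acum)):
    # ((buenos[i] - buenos[i-1]) * (malos[i] + malos[i-1])) / 2
    areas.append(
        (pct_buenos_acum[i] - pct_buenos_acum[i - 1])
        * (pct_malos_acum[i] + pct_malos_acum[i - 1])
        / 2
    )
```

This is the trapezoidal-rule area of each segment between consecutive cumulative points, which is why both the difference of one column and the sum of the other appear in the formula.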
I have tried this:
w = Window.rowsBetween(Window.unboundedPreceding, Window.currentRow)
df= df.withColumn('Areas', (( ( col('Pct_Acum_buenos')-col('Pct_Acum_buenos' ) )*(col('Pct_Acum_malos')+col('Pct_Acum_malos')))/2).over(w))
Here is a way to access previous values in pySpark. Going by that:
from pyspark.sql import functions as F
from pyspark.sql.functions import col
from pyspark.sql.window import Window

# add an index column to use in the window's orderBy
df = df.withColumn('index', F.monotonically_increasing_id())
w = Window.partitionBy().orderBy('index')
df = df.withColumn('Areas',
    ((col('Pct_Acum_buenos') - F.lag('Pct_Acum_buenos').over(w))
     * (col('Pct_Acum_malos') + F.lag('Pct_Acum_malos').over(w))) / 2)