
Groupby and create a new column in PySpark dataframe

I have a PySpark dataframe like this,

+----------+--------+
|id_       | p      |
+----------+--------+
|  1       | A      |
|  1       | B      |
|  1       | B      |
|  1       | A      |
|  1       | A      |
|  1       | B      |
|  2       | C      |
|  2       | C      |
|  2       | C      |
|  2       | A      |
|  2       | A      |
|  2       | C      |
+----------+--------+

I want to create another column for each group of id_. That column is currently produced with the following pandas code,

sample.groupby(by=['id_'], group_keys=False).apply(lambda grp : grp['p'].ne(grp['p'].shift()).cumsum())

How can I do this on a PySpark dataframe?

Currently, I am doing this with a pandas UDF, which runs very slowly.
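For reference, a grouped pandas UDF for this typically looks something like the sketch below (a sketch only: the function name add_change_id, the output column change_id, and the schema are assumptions, since the actual UDF is not shown in the post; requires Spark >= 2.3):

from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructType, StructField, StringType, LongType

# output schema: the input columns plus the computed change_id column
schema = StructType([
    StructField('id_', StringType()),
    StructField('p', StringType()),
    StructField('change_id', LongType()),
])

@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def add_change_id(pdf):
    # same pandas logic as above, applied to each id_ group separately
    pdf['change_id'] = pdf['p'].ne(pdf['p'].shift()).cumsum()
    return pdf

df.groupby('id_').apply(add_change_id).show()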

What are the alternatives?

The expected column would look like this,

1
2
2
3
3
4
1
1
1
2
2
3

You can use a combination of a udf and window functions to achieve the result:

# required imports
from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType

# define a window, which we will use to calculate lag values;
# note: ordering only by id_ does not fix the order of rows within a group,
# so this relies on the input order being preserved
w = Window.partitionBy().orderBy(F.col('id_'))

# define a user defined function (udf) that compares each row with its lag;
# for the first row of the dataframe the lag is None, so the comparison
# returns 1 and starts the count
def f(lag_val, current_val):
    if lag_val != current_val:
        return 1
    return 0

# register the udf so we can use it with our dataframe
func_udf = F.udf(f, IntegerType())

# read the csv file (note: all columns are read as strings by default)
df = spark.read.csv('/path/to/file.csv', header=True)

# create a new column by applying the udf to the lagged and current values,
# then apply a window function on an id_-partitioned window to calculate
# the cumulative sum
df.withColumn(
    "new_column", func_udf(F.lag("p").over(w), df['p'])
).withColumn(
    'cumsum',
    F.sum('new_column').over(
        w.partitionBy(F.col('id_')).rowsBetween(Window.unboundedPreceding, 0)
    )
).show()

+---+---+----------+------+
|id_|  p|new_column|cumsum|
+---+---+----------+------+
|  1|  A|         1|     1|
|  1|  B|         1|     2|
|  1|  B|         0|     2|
|  1|  A|         1|     3|
|  1|  A|         0|     3|
|  1|  B|         1|     4|
|  2|  C|         1|     1|
|  2|  C|         0|     1|
|  2|  C|         0|     1|
|  2|  A|         1|     2|
|  2|  A|         0|     2|
|  2|  C|         1|     3|
+---+---+----------+------+

# where:
#  w.partitionBy : to partition by id_ column
#  w.rowsBetween : to specify frame boundaries
#  ref https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/expressions/Window.html#rowsBetween-long-long- 
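
As a side note, the same logic can be expressed entirely with built-in column expressions, which avoids the Python udf round-trip and usually runs faster. A minimal sketch, assuming a surrogate ordering column built with monotonically_increasing_id() (the data has no explicit ordering column, so this only approximates the original input order):

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# surrogate row order; only approximates the original file order (an assumption)
df2 = df.withColumn('row_order', F.monotonically_increasing_id())

w_id = Window.partitionBy('id_').orderBy('row_order')

# flag rows where p changes (the first row of each group has a NULL lag,
# which also counts as a change), then take a running sum per group
result = (
    df2
    .withColumn('prev_p', F.lag('p').over(w_id))
    .withColumn('change',
                F.when(F.col('prev_p').isNull() | (F.col('prev_p') != F.col('p')), 1)
                 .otherwise(0))
    .withColumn('cumsum',
                F.sum('change').over(
                    w_id.rowsBetween(Window.unboundedPreceding, 0)))
)
result.show()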
