
Groupby and create a new column in PySpark dataframe

I have a PySpark dataframe like this,

+----------+--------+
|id_       | p      |
+----------+--------+
|  1       | A      |
|  1       | B      |
|  1       | B      |
|  1       | A      |
|  1       | A      |
|  1       | B      |
|  2       | C      |
|  2       | C      |
|  2       | C      |
|  2       | A      |
|  2       | A      |
|  2       | C      |
+----------+--------+

I want to create another column for each group of id_. Currently the column is computed with this pandas code,

sample.groupby(by=['id_'], group_keys=False).apply(lambda grp : grp['p'].ne(grp['p'].shift()).cumsum())
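For reference, the one-liner can be traced on a minimal toy frame (constructed here for illustration; only the id_ = 1 group is shown): ne/shift marks each row where p differs from the previous row within its group, and cumsum numbers the resulting runs.

import pandas as pd

# toy frame mirroring the id_ = 1 rows above (illustrative only)
sample = pd.DataFrame({'id_': [1, 1, 1, 1, 1, 1],
                       'p':   ['A', 'B', 'B', 'A', 'A', 'B']})

# ne(shift) is True whenever p changes from the previous row in the group
# (the first row compares against NaN and is therefore always True);
# cumsum then numbers the runs of equal values
out = sample.groupby(by=['id_'], group_keys=False).apply(
    lambda grp: grp['p'].ne(grp['p'].shift()).cumsum())
print(out.tolist())  # [1, 2, 2, 3, 3, 4]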

How can I do this on a PySpark dataframe?

At the moment I am doing this with a pandas UDF, which runs very slowly.

What are the options?

The expected column would be like this,

1
2
2
3
3
4
1
1
1
2
2
3

You can use a combination of a udf and window functions to get your result:

# required imports
from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType

# define a window, which we will use to calculate lag values
w = Window().partitionBy().orderBy(F.col('id_'))

# define user defined function (udf) to perform calculation on each row
def f(lag_val, current_val):
    if lag_val != current_val:
        return 1
    return 0

# register the udf so we can use it with our dataframe
func_udf = F.udf(f, IntegerType())

# read csv file
df = spark.read.csv('/path/to/file.csv', header=True)

# create a new column with lag over the window we created earlier, apply the
# udf to the lagged and current values, then apply a window function again to
# calculate the cumsum
df.withColumn("new_column", func_udf(F.lag("p").over(w), df['p'])) \
  .withColumn('cumsum',
              F.sum('new_column').over(
                  w.partitionBy(F.col('id_'))
                   .rowsBetween(Window.unboundedPreceding, 0))) \
  .show()

+---+---+----------+------+
|id_|  p|new_column|cumsum|
+---+---+----------+------+
|  1|  A|         1|     1|
|  1|  B|         1|     2|
|  1|  B|         0|     2|
|  1|  A|         1|     3|
|  1|  A|         0|     3|
|  1|  B|         1|     4|
|  2|  C|         1|     1|
|  2|  C|         0|     1|
|  2|  C|         0|     1|
|  2|  A|         1|     2|
|  2|  A|         0|     2|
|  2|  C|         1|     3|
+---+---+----------+------+

# where:
#  w.partitionBy : to partition by id_ column
#  w.rowsBetween : to specify frame boundaries
#  ref https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/expressions/Window.html#rowsBetween-long-long- 
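Since the question notes that the pandas UDF route is very slow, the same change flag can also be built without any Python udf, using built-in column expressions only. This is a sketch of an alternative (not part of the original answer): it reuses the windows defined above and coalesces the null that lag produces on the very first row to 1, matching the udf's None != value behavior.

# udf-free variant: build the change flag as a pure column expression,
# so rows stay inside the JVM instead of round-tripping to Python
flag = F.coalesce(
    (F.lag('p').over(w) != F.col('p')).cast('int'),
    F.lit(1))  # lag is null on the first row; the udf counted that as a change

df.withColumn('new_column', flag) \
  .withColumn('cumsum',
              F.sum('new_column').over(
                  w.partitionBy(F.col('id_'))
                   .rowsBetween(Window.unboundedPreceding, 0))) \
  .show()

As in the answer above, note that ordering by id_ alone does not pin down the order of rows within a group; with real data you would order by an explicit ordering column.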
