

Create new column with max value based on filtered rows with groupby in pyspark

I have a Spark dataframe:

import pandas as pd
foo = pd.DataFrame({'id': [1,1,2,2,2], 'col': ['a','b','a','a','b'], 'value': [1,5,2,3,4],
'col_b': ['a','c','a','a','c']})
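
A minimal sketch of converting this pandas frame into a Spark DataFrame, assuming an active SparkSession named spark:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # reuse or create a local Spark session
foo = spark.createDataFrame(foo)            # convert the pandas DataFrame into a Spark DataFrame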

I want to create a new column with the max of the value column, grouped by id. But I want the max computed only over the rows where col == col_b.

My resulting Spark dataframe should look like this:

foo = pd.DataFrame({'id': [1,1,2,2,2], 'col': ['a','b','a','a','b'], 'value': [1,5,2,3,4],
'max_value':[1,1,3,3,3], 'col_b': ['a','c','a','a','c']})

I have tried:

from pyspark.sql import functions as f
from pyspark.sql.window import Window
w = Window.partitionBy('id')
foo = foo.withColumn('max_value', f.max('value').over(w))\
    .where(f.col('col') == f.col('col_b'))

But I end up losing some rows.

Any ideas?

Use the when function for a conditional aggregation inside max. The .where filter in your attempt drops the rows where col != col_b; wrapping the condition inside the aggregation keeps every row and only restricts which values feed the max:

from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy('id')

foo = foo.withColumn('max_value', F.max(F.when(F.col('col') == F.col('col_b'), F.col('value'))).over(w))
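
Putting it together, a minimal end-to-end sketch (assuming a local SparkSession; the column names and data come from the question):

import pandas as pd
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

pdf = pd.DataFrame({'id': [1, 1, 2, 2, 2], 'col': ['a', 'b', 'a', 'a', 'b'],
                    'value': [1, 5, 2, 3, 4], 'col_b': ['a', 'c', 'a', 'a', 'c']})
foo = spark.createDataFrame(pdf)

w = Window.partitionBy('id')
# when() returns NULL for rows where col != col_b, and max() ignores NULLs,
# so every row keeps the per-id max taken only over the matching rows
foo = foo.withColumn('max_value',
                     F.max(F.when(F.col('col') == F.col('col_b'), F.col('value'))).over(w))
foo.show()

# Expected output (row order within an id may vary):
# +---+---+-----+-----+---------+
# | id|col|value|col_b|max_value|
# +---+---+-----+-----+---------+
# |  1|  a|    1|    a|        1|
# |  1|  b|    5|    c|        1|
# |  2|  a|    2|    a|        3|
# |  2|  a|    3|    a|        3|
# |  2|  b|    4|    c|        3|
# +---+---+-----+-----+---------+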
