
How to add a constant column with maximum value in a pyspark dataframe without grouping by

Suppose that we have a PySpark dataframe with two columns, ID (it is unique) and VALUE.

I need to add a third column that always contains the same value, i.e. the maximum value of the column VALUE. I observe that in this case it doesn't make any sense to group by the ID, because I need a global maximum.

It sounds very simple, and probably it is, but I have only seen solutions involving grouping by, which do not fit my case. I tried a lot of things but nothing worked.

I need a solution in PySpark/Python code only. Thanks a lot!

You can do this:

from pyspark.sql.functions import max as spark_max, lit

# compute the maximum of the VALUE column
max_df = df.select(spark_max(df['VALUE'])).collect()
# max_df is a 1-row, 1-column result; extract the scalar value
max_val = max_df[0][0]
# add the constant column; lit() is needed because the value is a literal
df = df.withColumn('newcol', lit(max_val))
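
As a side note (not part of the original answer), the same constant column can be added without collecting the maximum to the driver, by cross-joining the aggregated max back onto the dataframe. A minimal sketch, assuming the same df and column names as above; the alias 'newcol' matches the snippet above:

from pyspark.sql import functions as F
# aggregate the global max into a one-row dataframe ...
max_row = df.agg(F.max('VALUE').alias('newcol'))
# ... and attach it to every row with a cross join
df = df.crossJoin(max_row)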

In your case you can use window functions. And I presume your VALUE column contains a list of values.

from pyspark.sql.functions import max
from pyspark.sql.window import Window

# an empty partitionBy() puts the whole dataframe into a single window,
# so max('VALUE') over it is the global maximum (partitioning by the unique ID
# would only give each row its own value back)
spec = Window.partitionBy()

newDF = df.withColumn('maxValue', max('VALUE').over(spec))
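
For illustration, a tiny end-to-end sketch with hypothetical data (the values are made up; only the ID, VALUE and maxValue names come from the thread):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
# hypothetical example data: unique IDs, arbitrary values
df = spark.createDataFrame([(1, 10), (2, 30), (3, 20)], ['ID', 'VALUE'])
newDF = df.withColumn('maxValue', F.max('VALUE').over(Window.partitionBy()))
newDF.show()
# every row ends up with maxValue = 30, the global maximum of VALUE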
