Mode of row as a new column in PySpark DataFrame
Is it possible to add a new column based on the most frequent value (the mode) of the preceding columns, where those columns hold string literals? Consider the following DataFrame:
df = spark.createDataFrame(
    [
        ('1', 25000, "black", "black", "white"),
        ('2', 16000, "red", "black", "white"),
    ],
    ['ID', 'cash', 'colour_body', 'colour_head', 'colour_foot']
)
The target DataFrame should then look like this:
df = spark.createDataFrame(
    [
        ('1', 25000, "black", "black", "white", "black"),
        ('2', 16000, "red", "black", "white", "white"),
    ],
    ['ID', 'cash', 'colour_body', 'colour_head', 'colour_foot', 'max_v']
)
If no mode can be detected (all colours differ), the last valid colour should be used instead. Is there some way to do this, perhaps a udf?
Define a UDF around statistics.mode to compute the row-wise mode with the desired semantics:
import statistics
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

def mode(*x):
    try:
        return statistics.mode(x)
    except statistics.StatisticsError:
        # No unique mode: fall back to the last value.
        return x[-1]

mode = udf(mode, StringType())

df.withColumn("max_v", mode(*[col(c) for c in df.columns if 'colour' in c])).show()
+---+-----+-----------+-----------+-----------+-----+
| ID| cash|colour_body|colour_head|colour_foot|max_v|
+---+-----+-----------+-----------+-----------+-----+
| 1|25000| black| black| white|black|
| 2|16000| red| black| white|white|
+---+-----+-----------+-----------+-----------+-----+
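One caveat: on Python 3.8+, statistics.mode no longer raises StatisticsError for multimodal data (it returns the first mode encountered), so the except branch above only fires on empty input. A version-independent sketch that makes the tie handling explicit, using collections.Counter (the helper name mode_or_last is my own), could look like this; it could be wrapped in a udf exactly as above:

```python
from collections import Counter

def mode_or_last(*values):
    """Row-wise mode; falls back to the last value when no single
    value occurs strictly more often than every other (a tie)."""
    counts = Counter(values)
    (top, top_n), = counts.most_common(1)
    # A unique mode exists only if exactly one value reaches the top count.
    if sum(1 for n in counts.values() if n == top_n) == 1:
        return top
    return values[-1]

print(mode_or_last("black", "black", "white"))  # black
print(mode_or_last("red", "black", "white"))    # white (three-way tie)
```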
For the general case with any number of columns, the udf solution by @cs95 is the way to go. However, in this specific case with only 3 columns, you can actually simplify the logic using just pyspark.sql.functions.when, which will be more performant than a udf.
from pyspark.sql.functions import col, when

def mode_of_3_cols(body, head, foot):
    return (
        when(
            (body == head) | (body == foot),
            body
        ).when(
            head == foot,
            head
        ).otherwise(foot)
    )
df.withColumn(
"max_v",
mode_of_3_cols(col("colour_body"), col("colour_head"), col("colour_foot"))
).show()
#+---+-----+-----------+-----------+-----------+-----+
#| ID| cash|colour_body|colour_head|colour_foot|max_v|
#+---+-----+-----------+-----------+-----------+-----+
#| 1|25000| black| black| white|black|
#| 2|16000| red| black| white|white|
#+---+-----+-----------+-----------+-----------+-----+
You only need to check whether any two columns are equal: if so, that value must be the mode. If not, return the last column.
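As a quick sanity check (plain Python rather than Spark; the helper names are mine), the three-branch when-logic can be verified against a brute-force mode over every combination of three colours:

```python
from collections import Counter

def mode_of_3(body, head, foot):
    # Mirrors the when-chain: body wins if it matches either other column,
    # then head if head == foot, otherwise fall back to foot.
    if body == head or body == foot:
        return body
    if head == foot:
        return head
    return foot

def reference_mode(values):
    # Brute-force mode with "last value on tie" semantics.
    counts = Counter(values)
    (top, n), = counts.most_common(1)
    ties = [v for v, c in counts.items() if c == n]
    return top if len(ties) == 1 else values[-1]

colours = ["red", "black", "white"]
for b in colours:
    for h in colours:
        for f in colours:
            assert mode_of_3(b, h, f) == reference_mode((b, h, f))
print("all 27 combinations agree")
```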