PySpark: Get top k columns for each row in a dataframe
I have a dataframe with a score for each offer for each contact. I want to create a new dataframe containing the top 3 offers for each contact.
The input dataframe looks like this:
=======================================================================
| contact | offer 1 | offer 2 | offer 3 | offer 4 | offer 5 | offer 6 |
=======================================================================
| name 1 | 0 | 3 | 1 | 2 | 1 | 6 |
-----------------------------------------------------------------------
| name 2 | 1 | 7 | 2 | 9 | 5 | 3 |
-----------------------------------------------------------------------
I want to transform it into a dataframe like this:
===============================================================
| contact | best offer | second best offer | third best offer |
===============================================================
| name 1 | offer 6 | offer 2 | offer 4 |
---------------------------------------------------------------
| name 2  | offer 4    | offer 2           | offer 5          |
---------------------------------------------------------------
You'll need a few imports:
from pyspark.sql.functions import array, col, lit, sort_array, struct
Using the data shown in the question:
df = sc.parallelize([
("name 1", 0, 3, 1, 2, 1, 6),
("name 2", 1, 7, 2, 9, 5, 3),
]).toDF(["contact"] + ["offer_{}".format(i) for i in range(1, 7)])
you can assemble and sort an array of structs:
offers = sort_array(array(*[
struct(col(c).alias("v"), lit(c).alias("k")) for c in df.columns[1:]
]), asc=False)
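The trick here is that `sort_array` on an array of structs orders the structs by their first field (the score `v`), falling back to the second field (the column name `k`) to break ties; `asc=False` gives descending order. A pure-Python sketch of the same ordering (the `top_k_offers` helper is hypothetical, just to illustrate the semantics):

```python
# Each struct is modeled as a (value, column-name) tuple; sorting tuples
# in reverse order mirrors sort_array(..., asc=False) on the structs.
rows = {
    "name 1": {"offer_1": 0, "offer_2": 3, "offer_3": 1,
               "offer_4": 2, "offer_5": 1, "offer_6": 6},
    "name 2": {"offer_1": 1, "offer_2": 7, "offer_3": 2,
               "offer_4": 9, "offer_5": 5, "offer_6": 3},
}

def top_k_offers(scores, k=3):
    """Return the names of the k highest-scoring offers,
    ties broken by column name (descending), like the struct sort."""
    pairs = sorted(((v, name) for name, v in scores.items()), reverse=True)
    return [name for _, name in pairs[:k]]

print(top_k_offers(rows["name 1"]))  # ['offer_6', 'offer_2', 'offer_4']
print(top_k_offers(rows["name 2"]))  # ['offer_4', 'offer_2', 'offer_5']
```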
and then select:
df.select(
["contact"] + [offers[i]["k"].alias("_{}".format(i)) for i in [0, 1, 2]])
This should give the following result:
+-------+-------+-------+-------+
|contact| _0| _1| _2|
+-------+-------+-------+-------+
| name 1|offer_6|offer_2|offer_4|
| name 2|offer_4|offer_2|offer_5|
+-------+-------+-------+-------+
Rename the columns to suit your needs and you are good to go.
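If you want to sanity-check the Spark output, the whole transformation is small enough to mirror in plain Python (the `best_offers` helper below is hypothetical, not part of the answer's code):

```python
def best_offers(contact, scores):
    """Return a dict shaped like one row of the target dataframe:
    the contact plus the names of its three best offers."""
    ranked = sorted(((v, name) for name, v in scores.items()), reverse=True)
    labels = ["best offer", "second best offer", "third best offer"]
    row = {"contact": contact}
    for label, (_, name) in zip(labels, ranked):
        row[label] = name
    return row

print(best_offers("name 2", {"offer_1": 1, "offer_2": 7, "offer_3": 2,
                             "offer_4": 9, "offer_5": 5, "offer_6": 3}))
```

Running this for each input row should reproduce the table above, which makes it easy to compare against the renamed Spark result.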