
How to get unique combinations in a list using Pyspark?

I am using the Python code below to get unique combinations out of the list.

import itertools
unique_combinations = [] 
ss = [['0_20F','1_20F','2_20F','3_20F','4_20F','5_20F','6_20F','7_20F','8_20F','9_20F','10_20F'],
      ['0_40F','1_40F','2_40F','3_40F','4_40F','5_40F','6_40F','7_40F','8_40F', '9_40F','10_40F'],
      ['0_40HC','1_40HC','2_40HC','3_40HC','4_40HC','5_40HC','6_40HC','7_40HC','8_40HC', '9_40HC','10_40HC']]

# itertools.product yields the Cartesian product of the three lists
for l in itertools.product(*ss):
    unique_combinations.append(l)
    print(l)

The sample output is as follows.

('0_20F', '0_40F', '0_40HC')
('0_20F', '0_40F', '1_40HC')
('0_20F', '0_40F', '2_40HC')
('0_20F', '0_40F', '3_40HC')
('0_20F', '0_40F', '4_40HC')
('0_20F', '0_40F', '5_40HC')
('0_20F', '0_40F', '6_40HC')
('0_20F', '0_40F', '7_40HC')
('0_20F', '0_40F', '8_40HC')
('0_20F', '0_40F', '9_40HC')

I need to get this done using PySpark. Is it possible via PySpark?

For each list in ss, a new DataFrame can be created. After that, all DataFrames can be cross joined:

import functools

# Create one single-column DataFrame per inner list of ss
dfs = [spark.createDataFrame([[s] for s in ssx], schema=[f"col_{i}"])
       for i, ssx in enumerate(ss)]

# Cross join all DataFrames to build the full set of combinations
result = functools.reduce(lambda l, r: l.crossJoin(r), dfs)
result.show(5)

#+-----+-----+------+
#|col_0|col_1| col_2|
#+-----+-----+------+
#|0_20F|0_40F|0_40HC|
#|0_20F|0_40F|1_40HC|
#|0_20F|1_40F|0_40HC|
#|0_20F|1_40F|1_40HC|
#|1_20F|0_40F|0_40HC|
#+-----+-----+------+
#only showing top 5 rows
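
If the combinations are also needed back on the driver as plain Python tuples, the cross-joined rows can be collected, a minimal sketch assuming the result is small enough to fit in driver memory:

# Sketch: convert the cross-joined rows back into tuples, mirroring the
# itertools.product output (assumes the result fits in driver memory)
unique_combinations = [tuple(row) for row in result.collect()]
print(len(unique_combinations))  # 11 * 11 * 11 = 1331 combinations

For large inputs it is better to keep the result as a DataFrame and continue processing it in Spark instead of collecting it.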
