
Join multiple data frames in PySpark

I have the following data frames, each with two columns and exactly the same number of rows. How do I join them so that I get a single data frame with two columns and all of the rows from the original data frames?

For example:

Data frame 1

+--------------+-------------+
| colS         |  label      |
+--------------+-------------+
| sample_0_URI |  0          |
| sample_0_URI |  0          |
+--------------+-------------+

Data frame 2

+--------------+-------------+
| colS         |  label      |
+--------------+-------------+
| sample_1_URI |  1          |
| sample_1_URI |  1          |
+--------------+-------------+

Data frame 3

+--------------+-------------+
| col1         |  label      |
+--------------+-------------+
| sample_2_URI |  2          |
| sample_2_URI |  2          |
+--------------+-------------+

Data frame 4

+--------------+-------------+
| col1         |  label      |
+--------------+-------------+
| sample_3_URI |  3          |
| sample_3_URI |  3          |
+--------------+-------------+

...

I want the result of the join to be:

+--------------+-------------+
| col1         |  label      |
+--------------+-------------+
| sample_0_URI |  0          |
| sample_0_URI |  0          |
| sample_1_URI |  1          |
| sample_1_URI |  1          |
| sample_2_URI |  2          |
| sample_2_URI |  2          |
| sample_3_URI |  3          |
| sample_3_URI |  3          |
+--------------+-------------+

Now, if I want to one-hot encode the label column, it should look something like this:

from pyspark.ml.feature import OneHotEncoder

oe = OneHotEncoder(inputCol="label", outputCol="one_hot_label")
df = oe.transform(df)  # df is the joined data frame <colS, label>; in Spark 3.x OneHotEncoder is an estimator, so use oe.fit(df).transform(df)

You are looking for union.

In this case, what I would do is put the data frames in a list and use reduce:

from functools import reduce

dataframes = [df_1, df_2, df_3, df_4]

result = reduce(lambda first, second: first.union(second), dataframes)
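
For reference, a minimal end-to-end sketch of the whole flow, assuming an active SparkSession named spark and Spark 3.x (where OneHotEncoder is an estimator and must be fitted before transforming); the sample data and variable names are only illustrative:

from functools import reduce
from pyspark.sql import SparkSession
from pyspark.ml.feature import OneHotEncoder

spark = SparkSession.builder.getOrCreate()

# Small data frames shaped like the ones in the question.
df_1 = spark.createDataFrame([("sample_0_URI", 0), ("sample_0_URI", 0)], ["colS", "label"])
df_2 = spark.createDataFrame([("sample_1_URI", 1), ("sample_1_URI", 1)], ["colS", "label"])
df_3 = spark.createDataFrame([("sample_2_URI", 2), ("sample_2_URI", 2)], ["col1", "label"])
df_4 = spark.createDataFrame([("sample_3_URI", 3), ("sample_3_URI", 3)], ["col1", "label"])

# union matches columns by position, so the differing first-column names
# (colS vs. col1) do not matter; the result takes df_1's column names.
result = reduce(lambda first, second: first.union(second), [df_1, df_2, df_3, df_4])
result.show()

# One-hot encode the numeric label column (fit first, then transform).
oe = OneHotEncoder(inputCol="label", outputCol="one_hot_label")
encoded = oe.fit(result).transform(result)
encoded.show(truncate=False)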
