
Spark Scala: merging two single-column DataFrames, duplicating the second DataFrame for each row of the first

I want to merge two columns or two DataFrames, like df1

+--+
|id|
+--+
|1 |
|2 |
|3 |
+--+

df2 --> this one can be a list as well

+--+
|m |
+--+
|A |
|B |
|C |
+--+

I want the resulting table to be

+--+--+
|id|m |
+--+--+
|1 |A |
|1 |B |
|1 |C |
|2 |A |
|2 |B |
|2 |C |
|3 |A |
|3 |B |
|3 |C |
+--+--+

def crossJoin(right: org.apache.spark.sql.Dataset[_]): org.apache.spark.sql.DataFrame

Using the crossJoin function you can get the same result. Please check the code below.
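For reference, here is a minimal sketch of how the two example DataFrames used in the session below might be created in spark-shell (toDF comes from spark.implicits._, which spark-shell imports automatically; the column names id and m follow the question):

import spark.implicits._

// Build the two small single-column DataFrames used in the session below.
val dfa = Seq(1, 2, 3).toDF("id")       // column id: 1, 2, 3
val dfb = Seq("A", "B", "C").toDF("m")  // column m:  A, B, C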

scala> dfa.show
+---+
| id|
+---+
|  1|
|  2|
|  3|
+---+


scala> dfb.show
+---+
|  m|
+---+
|  A|
|  B|
|  C|
+---+


scala> dfa.crossJoin(dfb).orderBy($"id".asc).show(false)
+---+---+
|id |m  |
+---+---+
|1  |B  |
|1  |A  |
|1  |C  |
|2  |A  |
|2  |B  |
|2  |C  |
|3  |C  |
|3  |B  |
|3  |A  |
+---+---+
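If the second input starts out as a plain Scala list rather than a DataFrame (the question notes it "can be a list as well"), one option is to convert the list to a single-column DataFrame first and cross join it the same way. A minimal sketch, again assuming spark.implicits._ is in scope; ms and result are hypothetical names:

import spark.implicits._

// Hypothetical list input instead of a ready-made DataFrame.
val ms = List("A", "B", "C")

// Turn the list into a one-column DataFrame, then cross join as above.
val result = dfa.crossJoin(ms.toDF("m")).orderBy($"id".asc)
result.show(false)

Note that crossJoin is an explicit Cartesian product, so in Spark 2.x it avoids the "implicit cartesian product" error that a plain join without a join condition would raise unless spark.sql.crossJoin.enabled is set.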
