
Spark: Join two dataframes on an array type column

I have a simple use case: two dataframes, df1 and df2, and I am looking for an efficient way to join them.

df1: my main dataframe (billions of records)

+--------+-----------+--------------+
|doc_id  |doc_name   |doc_type_id   |
+--------+-----------+--------------+
|   1    |doc_name_1 |[1,4]         |
|   2    |doc_name_2 |[3,2,6]       |
+--------+-----------+--------------+

df2: contains the labels of the doc types (40,000 records); since it is small, I am broadcasting it.

+------------+----------------+
|doc_type_id |doc_type_name   |
+------------+----------------+
|   1        |doc_type_1      |
|   2        |doc_type_2      |
|   3        |doc_type_3      |
|   4        |doc_type_4      |
|   5        |doc_type_5      |
|   6        |doc_type_6      |
+------------+----------------+

I would like to join these two dataframes to produce something like this:

+--------+------------+--------------+----------------------------------------+
|doc_id  |doc_name    |doc_type_id   |doc_type_name                           |
+--------+------------+--------------+----------------------------------------+
|   1    |doc_name_1  |[1,4]         |["doc_type_1","doc_type_4"]             |
|   2    |doc_name_2  |[3,2,6]       |["doc_type_3","doc_type_2","doc_type_6"]|
+--------+------------+--------------+----------------------------------------+

Thanks

We can use the array_contains + groupBy + collect_list functions for this case.

Example:

import org.apache.spark.sql.functions._
import spark.implicits._ // for .toDF on local Seqs (already in scope in spark-shell)

// Sample data matching the question
val df1 = Seq(("1","doc_name_1",Seq(1,4)), ("2","doc_name_2",Seq(3,2,6)))
  .toDF("doc_id","doc_name","doc_type_id")

val df2 = Seq(("1","doc_type_1"), ("2","doc_type_2"), ("3","doc_type_3"),
  ("4","doc_type_4"), ("5","doc_type_5"), ("6","doc_type_6"))
  .toDF("doc_type_id","doc_type_name")

df1.createOrReplaceTempView("tbl")
df2.createOrReplaceTempView("tbl2")

// array_contains checks whether each doc_type_id from tbl2 appears in the array column of tbl;
// int(b.doc_type_id) casts the lookup key to match the array's element type
spark.sql("""
  select a.doc_id, a.doc_name, a.doc_type_id, collect_list(b.doc_type_name) as doc_type_name
  from tbl a
  join tbl2 b
    on array_contains(a.doc_type_id, int(b.doc_type_id))
  group by a.doc_id, a.doc_name, a.doc_type_id
""").show(false)

//+------+----------+-----------+------------------------------------+
//|doc_id|doc_name  |doc_type_id|doc_type_name                       |
//+------+----------+-----------+------------------------------------+
//|2     |doc_name_2|[3, 2, 6]  |[doc_type_2, doc_type_3, doc_type_6]|
//|1     |doc_name_1|[1, 4]     |[doc_type_1, doc_type_4]            |
//+------+----------+-----------+------------------------------------+
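For reference, the same array_contains-based join can also be expressed through the DataFrame API instead of SQL. This is only a minimal sketch assuming the df1 and df2 defined above; the cast to int aligns the lookup key with the array's element type, and joinedDf is just an illustrative variable name:

val joinedDf = df1.join(df2,
    array_contains(df1("doc_type_id"), df2("doc_type_id").cast("int")), "inner").
  groupBy(df1("doc_id"), df1("doc_name"), df1("doc_type_id")).
  agg(collect_list(df2("doc_type_name")).alias("doc_type_name"))

joinedDf.show(false)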

Another way to achieve this is by using explode + join + collect_list:

// Explode the array so each element of doc_type_id becomes its own row,
// join the exploded value to the lookup table, then collect the names back into an array
val df3 = df1.withColumn("arr", explode(col("doc_type_id")))

df3.join(df2, df2.col("doc_type_id") === df3.col("arr"), "inner").
  groupBy(df3.col("doc_id"), df3.col("doc_type_id"), df3.col("doc_name")).
  agg(collect_list(df2.col("doc_type_name")).alias("doc_type_name")).
  show(false)

//+------+-----------+----------+------------------------------------+
//|doc_id|doc_type_id|doc_name  |doc_type_name                       |
//+------+-----------+----------+------------------------------------+
//|1     |[1, 4]     |doc_name_1|[doc_type_1, doc_type_4]            |
//|2     |[3, 2, 6]  |doc_name_2|[doc_type_2, doc_type_3, doc_type_6]|
//+------+-----------+----------+------------------------------------+
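Since the question notes that df2 is small enough to broadcast, the lookup side of either join can be hinted explicitly. A minimal sketch applied to the explode-based variant above (the result is the same):

import org.apache.spark.sql.functions.broadcast

// Broadcasting the small lookup table to the executors avoids shuffling the large, exploded side
df3.join(broadcast(df2), df2.col("doc_type_id") === df3.col("arr"), "inner").
  groupBy(df3.col("doc_id"), df3.col("doc_type_id"), df3.col("doc_name")).
  agg(collect_list(df2.col("doc_type_name")).alias("doc_type_name")).
  show(false)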
