Scala Spark, compare two DataFrames and select the value of another column
How can we compare two DataFrames in Spark Scala to find the differences between these two files, and report which column and value differ?
I have two files, from which I created two DataFrames, prod1 and prod2. I need to find the records where values do not match between the two DataFrames, along with the name of the mismatched column. id_sk is the primary key. All columns are of string data type.
DataFrame 1 (prod1)
id_sk | uuid  | name
1     | 10    | a
2     | 20    | b
3     | 30    | c
DataFrame 2 (prod2)
id_sk | uuid  | name
2     | 20    | b-upd
3     | 30-up | c
4     | 40    | d
So I need a result DataFrame in the following format:
id | col_name | values
2  | name     | b,b-upd
3  | uuid     | 30,30-up
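For reference, here is a minimal sketch that builds the two sample DataFrames above (assuming an existing SparkSession named spark; all columns as strings, per the question):

import spark.implicits._

// Sample data matching the two tables above
val prod1 = Seq(
  ("1", "10", "a"),
  ("2", "20", "b"),
  ("3", "30", "c")
).toDF("id_sk", "uuid", "name")

val prod2 = Seq(
  ("2", "20", "b-upd"),
  ("3", "30-up", "c"),
  ("4", "40", "d")
).toDF("id_sk", "uuid", "name")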
I did an inner join and compared the records that do not match.
The result I get is as follows:
id_sk | uuid_prod1 | uuid_prod2 | name_prod1 | name_prod2
2     | 20         | 20         | b          | b-upd
3     | 30         | 30-up      | c          | c
val common_rec = prod1.join(prod2, prod1("id_sk") === prod2("id_sk"), "inner")
  .select(prod1("id_sk").alias("id_sk_prod1"),
          prod1("uuid").alias("uuid_prod1"),
          prod2("uuid").alias("uuid_prod2"),
          prod1("name").alias("name_prod1"),
          prod2("name").alias("name_prod2"))
val compare = spark.sql("select ... from common_rec where col_prod1 <> col_prod2")
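Note that spark.sql can only resolve common_rec after it has been registered as a temporary view; a minimal sketch of the missing step (the exact select list is elided in the question, so the filter below uses the name columns from the join above as an example):

// Register the joined DataFrame so SQL can reference it by name
common_rec.createOrReplaceTempView("common_rec")

// Example: rows where the "name" values disagree between the two sources
val compare = spark.sql(
  "select id_sk_prod1 from common_rec where name_prod1 <> name_prod2")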
Here is a possible solution:
import org.apache.spark.sql.functions.{when, concat_ws}

// Join the two DataFrames, keep only rows where "name" or "uuid" differ,
// and record which column differs in a new "col_name" column:
var output = df1.join(df2, df1.col("id_sk") === df2.col("id_sk"))
  .where(df1.col("name") =!= df2.col("name") || df1.col("uuid") =!= df2.col("uuid"))
  .withColumn("col_name", when(df1.col("name") =!= df2.col("name"), "name")
    .otherwise(when(df1.col("uuid") =!= df2.col("uuid"), "uuid")))
// Build the new "col_values" column containing the concatenated
// old and new values of whichever column differs:
output = output.withColumn("col_values",
  when(output.col("col_name") === "name", concat_ws(",", df1.col("name"), df2.col("name")))
    .when(output.col("col_name") === "uuid", concat_ws(",", df1.col("uuid"), df2.col("uuid"))))

output = output.select(df1.col("id_sk"), output.col("col_name"), output.col("col_values"))
+-----+--------+----------+
|id_sk|col_name|col_values|
+-----+--------+----------+
|    2|    name|   b,b-upd|
|    3|    uuid|  30,30-up|
+-----+--------+----------+
Please note that I do not think this is the best solution, only a starting point (for instance, what if a row has more than one differing column value?).
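To handle that case, here is one possible generalization (my sketch, not part of the original answer): build an array with one (col_name, col_values) struct per differing column, then explode it, so a row with several mismatches produces several output rows:

import org.apache.spark.sql.functions._

// Columns to compare between the two DataFrames
val compareCols = Seq("uuid", "name")

// One struct per compared column, null when the values match
val diffs = array(compareCols.map { c =>
  when(df1.col(c) =!= df2.col(c),
    struct(lit(c).alias("col_name"),
           concat_ws(",", df1.col(c), df2.col(c)).alias("col_values")))
}: _*)

val generalized = df1.join(df2, df1.col("id_sk") === df2.col("id_sk"))
  .withColumn("diff", explode(diffs))   // one row per compared column
  .where(col("diff").isNotNull)         // keep only actual mismatches
  .select(df1.col("id_sk"), col("diff.col_name"), col("diff.col_values"))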