Spark SQL query for selecting values from two columns if the second column's value is present in the first column
Input:
col_a col_b
A B
D B
B E
C A
I am trying to get the output below using Spark SQL, but I am unable to get the desired result using NOT EXISTS or a left outer join. Please help me produce the following output.
col_a col_b
A B
D B
C A
I want to keep a row only if its col_b value is present somewhere in col_a.
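For reference, this filter can also be written directly in Spark SQL with an IN subquery (supported since Spark 2.0). A sketch, assuming the data above is loaded in a DataFrame `df` and registered as a temporary view named `t`:

```scala
// Register the DataFrame as a temp view so it can be queried with SQL.
// Assumes `spark` is the active SparkSession and `df` holds the (col_a, col_b) pairs.
df.createOrReplaceTempView("t")

// Keep only the rows whose col_b value appears somewhere in col_a.
spark.sql("SELECT col_a, col_b FROM t WHERE col_b IN (SELECT col_a FROM t)").show
```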
Supposing that your col_a column isn't too large to collect to the driver, I would do something like this:
scala> val df = Seq(("A", "B"), ("D", "B"), ("B", "E"), ("C", "A")).toDF("col_a", "col_b")
df: org.apache.spark.sql.DataFrame = [col_a: string, col_b: string]
scala> df.show
+-----+-----+
|col_a|col_b|
+-----+-----+
| A| B|
| D| B|
| B| E|
| C| A|
+-----+-----+
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
scala> import scala.collection.mutable.HashSet
import scala.collection.mutable.HashSet
scala> val col_a_vals = df.rdd.map{case Row(a: String, b: String) => a}.collect.toSeq
col_a_vals: Seq[String] = WrappedArray(A, D, B, C)
scala> val col_a_set = HashSet(col_a_vals :_*)
col_a_set: scala.collection.mutable.HashSet[String] = Set(B, C, D, A)
scala> val broad_set = sc.broadcast(col_a_set)
broad_set: org.apache.spark.broadcast.Broadcast[scala.collection.mutable.HashSet[String]] = Broadcast(56)
scala> val contains_col_a = udf((value: String) => broad_set.value.contains(value))
contains_col_a: org.apache.spark.sql.UserDefinedFunction = UserDefinedFunction(<function1>,BooleanType,List(StringType))
scala> df.filter(contains_col_a($"col_b")).show
+-----+-----+
|col_a|col_b|
+-----+-----+
| A| B|
| D| B|
| C| A|
+-----+-----+
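If col_a is too large to collect and broadcast by hand, the same result can be obtained with a left-semi self-join, which keeps a left-hand row whenever at least one match exists on the right, without duplicating it. A sketch using the same `df` as above:

```scala
// Self-join: keep rows of the left copy whose col_b matches some col_a on the right.
// A "left_semi" join returns only the left side's columns and never duplicates left rows.
df.as("l")
  .join(df.as("r"), $"l.col_b" === $"r.col_a", "left_semi")
  .show
```

This lets Spark plan the join itself (broadcasting the smaller side automatically when it fits), so no manual HashSet, broadcast variable, or UDF is needed.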