
How to compare two columns in Spark DataFrames using Scala

I want to compare two columns in a Spark DataFrame: if the value of one column (attr_value) is found among the values of another (attr_valuelist), I want only that value to be kept. Otherwise, the column value should be null.

For example, given the following input:

id1 id2   attrname  attr_value   attr_valuelist
1   2     test      Yes          Yes, No
2   1     test1     No           Yes, No
3   2     test2     value1       val1, Value1,value2

I would expect the following output:

id1 id2   attrname  attr_value   attr_valuelist
1   2     test      Yes          Yes
2   1     test1     No           No
3   2     test2     value1       Value1

Can you try this code? I think it will work using SQL's CASE WHEN with a LIKE containment check.

// Register the source DataFrame as a temp view so it can be queried with SQL
your_dataframe.createOrReplaceTempView("tbl")

// CASE WHEN: keep attr_value when attr_valuelist contains it, otherwise null
val result = sqlContext.sql(
  """select id1, id2, attrname, attr_value,
    |       case when attr_valuelist like concat('%', attr_value, '%')
    |            then attr_value else null
    |       end as attr_valuelist
    |from tbl""".stripMargin)

result.show()
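Note that LIKE in Spark SQL is case-sensitive, so the value1 / Value1 row from the sample input would come out null, and this CASE returns attr_value itself rather than the matching element of the list. A variant that lower-cases both sides before comparing (a sketch against the same tbl view, not a definitive fix) would be:

// Case-insensitive variant (sketch): lower-case both sides of the LIKE check.
// This still returns attr_value, not the matching element of attr_valuelist.
val resultCi = sqlContext.sql(
  """select id1, id2, attrname, attr_value,
    |       case when lower(attr_valuelist) like concat('%', lower(attr_value), '%')
    |            then attr_value else null
    |       end as attr_valuelist
    |from tbl""".stripMargin)

resultCi.show()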

I assume, given your sample input, that the column with the search item contains a string while the search target is a sequence of strings. I also assume you're interested in case-insensitive search.

This is going to be the input (I added a row that would yield a null, to test the behavior of the UDF I wrote):

+---+---+--------+----------+----------------------+
|id1|id2|attrname|attr_value|attr_valuelist        |
+---+---+--------+----------+----------------------+
|1  |2  |test    |Yes       |[Yes, No]             |
|2  |1  |test1   |No        |[Yes, No]             |
|3  |2  |test2   |value1    |[val1, Value1, value2]|
|3  |2  |test2   |value1    |[val1, value2]        |
+---+---+--------+----------+----------------------+

You can solve your problem with a very simple UDF.

import org.apache.spark.sql.functions.udf

// Case-insensitive lookup: returns the first matching element of the list,
// or None (rendered as null by Spark) when there is no match
val find = udf {
  (item: String, collection: Seq[String]) =>
    collection.find(_.toLowerCase == item.toLowerCase)
}

val df = spark.createDataFrame(Seq(
  (1, 2, "test", "Yes", Seq("Yes", "No")),
  (2, 1, "test1", "No", Seq("Yes", "No")),
  (3, 2, "test2", "value1", Seq("val1", "Value1", "value2")),
  (3, 2, "test2", "value1", Seq("val1", "value2"))
)).toDF("id1", "id2", "attrname", "attr_value", "attr_valuelist")

df.select(
  $"id1", $"id2", $"attrname", $"attr_value",
  find($"attr_value", $"attr_valuelist") as "attr_valuelist")

Calling show on the result of the last command would yield the following output:

+---+---+--------+----------+--------------+
|id1|id2|attrname|attr_value|attr_valuelist|
+---+---+--------+----------+--------------+
|  1|  2|    test|       Yes|           Yes|
|  2|  1|   test1|        No|            No|
|  3|  2|   test2|    value1|        Value1|
|  3|  2|   test2|    value1|          null|
+---+---+--------+----------+--------------+

You can execute this code in any spark-shell. If you are using it from a job you submit to a cluster, remember to import spark.implicits._.
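For reference, a minimal skeleton of such a submitted job might look like this (the object and app names are illustrative, not from the original answer):

import org.apache.spark.sql.SparkSession

// Minimal sketch of a job submitted with spark-submit (names are illustrative)
object CompareColumnsJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("compare-columns").getOrCreate()
    import spark.implicits._ // required for the $"..." column syntax used above

    // ... build the DataFrame and apply the find UDF as shown above ...

    spark.stop()
  }
}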


 