Spark Scala Delete rows in one RDD based on columns of another RDD

I'm very new to Scala and Spark and not sure how to start.

I have one RDD that looks like this:

1,2,3,11
2,1,4,12
1,4,5,13
3,5,6,12

Another that looks like this:

2,1
1,2

I want to filter the first RDD so that it deletes any row whose first two columns match a row of the second RDD. The output should look like:

1,4,5,13
3,5,6,12

One way is to key both RDDs on their first two fields and then use a left outer join:

// input rdds
val rdd1 = spark.sparkContext.makeRDD(Seq((1,2,3,11), (2,1,4,12), (1,4,5,13), (3,5,6,12)))
val rdd2 = spark.sparkContext.makeRDD(Seq((1,2), (2,1)))

// reshape the two RDDs into (key, value) pairs
// the key of the first RDD is a tuple of its first two fields; the value is the whole record
// the key of the second RDD is the tuple itself; the value is just null
// then we can join on the keys
val rdd1_key = rdd1.map(record => ((record._1, record._2), record))
val rdd2_key = rdd2.map(record => (record, null))

// 1. perform a left outer join; each record becomes (key, (val1, val2))
// 2. filter, keeping the records that found no join partner:
//    if there is no match, val2 is None; otherwise it is Some(null),
//    wrapping the null we hardcoded in the previous step
// 3. keep only val1
rdd1_key.leftOuterJoin(rdd2_key)
  .filter(record => record._2._2.isEmpty)
  .map(record => record._2._1)
  .collect().foreach(println(_))

// result
(1,4,5,13)
(3,5,6,12)
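
As a side note, the same result can be obtained more concisely with subtractByKey, which removes every pair whose key also appears in the other RDD. A minimal sketch reusing the rdd1_key and rdd2_key pairs built above:

// subtractByKey keeps only the pairs of rdd1_key whose key is absent from rdd2_key
rdd1_key.subtractByKey(rdd2_key)
  .map(record => record._2)      // recover the original records
  .collect().foreach(println(_))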


I personally prefer the DataFrame/Dataset API: DataFrames are an optimized layer on top of RDDs, come with more built-in functions, and feel closer to traditional databases.

Following is the DataFrame way.

The first step is to convert both RDDs to DataFrames:

import spark.implicits._  // implicits of the SparkSession used above
val df1 = rdd1.toDF("col1", "col2", "col3", "col4")
val df2 = rdd2.toDF("col1", "col2")

The second step is to add a marker column to the second DataFrame, which we will use to check the join condition later:

import org.apache.spark.sql.functions._
val tempdf2 = df2.withColumn("check", lit("check"))
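
To sanity-check the marker column, tempdf2.show() should print something like:

+----+----+-----+
|col1|col2|check|
+----+----+-----+
|   1|   2|check|
|   2|   1|check|
+----+----+-----+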

The final step is to join the two DataFrames, filter out the matched rows, and drop the helper column:

val finalDF = df1.join(tempdf2, Seq("col1", "col2"), "left")
  .filter($"check".isNull)
  .drop($"check")

The final DataFrame should be:

+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|3   |5   |6   |12  |
|1   |4   |5   |13  |
+----+----+----+----+

Now you can either convert back to an RDD using finalDF.rdd or continue your further processing with the DataFrame itself.
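
As a side note, Spark (2.0+) also supports a left anti join, which collapses the marker column, the filter, and the drop into a single step. A minimal sketch (the val name finalAnti is just illustrative):

// "left_anti" keeps only the rows of df1 with no matching (col1, col2) in df2
val finalAnti = df1.join(df2, Seq("col1", "col2"), "left_anti")
finalAnti.show()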

I hope the answer is helpful.
