
How to divide dataset in two parts based on filter in Spark-scala

Is it possible to divide a DataFrame into two parts using a single filter operation?

For example, let's say df has the records below:

UID    Col
 1       a
 2       b
 3       c

If I do

val df1 = df.filter($"uid" <=> 2)

can I save the filtered and non-filtered records to different RDDs in a single operation?

 df1 can have records where uid = 2
 df2 can have records with uid 1 and 3 

If you're interested only in saving the data, you can add an indicator column to the DataFrame:

import sqlContext.implicits._ // brings in toDF and the $ column syntax (Spark 1.6 style)

val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("uid", "col")
val dfWithInd = df.withColumn("ind", $"uid" <=> 2)
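
For the sample rows above, the indicator is true only for uid = 2, so calling dfWithInd.show() should print something like:

+---+---+-----+
|uid|col|  ind|
+---+---+-----+
|  1|  a|false|
|  2|  b| true|
|  3|  c|false|
+---+---+-----+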

and use it as a partition column for the DataFrameWriter with one of the supported formats (as of 1.6: Parquet, text, and JSON):

dfWithInd.write.partitionBy("ind").parquet(...)

It will create two separate directories (ind=false and ind=true) on write.
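To load either side back on its own, you can point the reader at a single partition directory. A minimal sketch, assuming a hypothetical output path /tmp/out and a Spark 1.6-style sqlContext:

val matched = sqlContext.read.parquet("/tmp/out/ind=true")  // rows where uid = 2
val rest = sqlContext.read.parquet("/tmp/out/ind=false")    // remaining rows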

In general though, it is not possible to yield multiple RDDs or DataFrames from a single transformation. See How to split a RDD into two or more RDDs?
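
If you do need two DataFrames rather than two output directories, the usual workaround is two filter passes over a cached source. A minimal sketch (two transformations, not one), again assuming a Spark 1.6-style sqlContext:

import sqlContext.implicits._

val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("uid", "col")
df.cache() // avoid recomputing the source for each filter

val df1 = df.filter($"uid" <=> 2)    // records where uid = 2
val df2 = df.filter(!($"uid" <=> 2)) // records with uid 1 and 3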
