
Spark DataFrame: operate on groups

I've got a DataFrame I'm operating on, and I want to group by a set of columns and operate per-group on the rest of the columns. In regular RDD-land I think it would look something like this:

rdd.map(tup => ((tup._1, tup._2, tup._3), tup)).
  groupByKey().
  foreachPartition(iter => doSomeJob(iter))

In DataFrame-land I'd start like this:

df.groupBy("col1", "col2", "col3")  // Reference by name

but then I'm not sure how to operate on the groups if my operations are more complicated than the mean/min/max/count offered by GroupedData.

For example, I want to build a single MongoDB document per ("col1", "col2", "col3") group (by iterating through the associated Rows in the group), scale down to N partitions, then insert the docs into a MongoDB database. The N limit is the max number of simultaneous connections I want.
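
Spelled out a bit more in RDD-land, I'm imagining something roughly like this (buildDocument and insertBatch are just placeholders for my MongoDB code, and N is the connection cap mentioned above):

rdd.map(tup => ((tup._1, tup._2, tup._3), tup)).
  groupByKey().                                          // Iterable of tuples per (col1, col2, col3)
  map { case (key, rows) => buildDocument(key, rows) }.  // fold each group into one document (placeholder)
  coalesce(N).                                           // at most N partitions => at most N connections
  foreachPartition(docs => insertBatch(docs))            // write one partition per connection (placeholder)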

Any advice?

You can do a self-join. First get the groups:

val groups = df.groupBy($"col1", $"col2", $"col3").count()  // one row per distinct (col1, col2, col3)

Then you can join this back to the original DataFrame:

val joinedDF = groups
  .select($"col1" as "l_col1", $"col2" as "l_col2", $"col3" as "l_col3")
  .join(df, $"col1" <=> $"l_col1" and $"col2" <=> $"l_col2" and $"col3" <=> $"l_col3")

While this gets you exactly the same data you had originally (plus three additional, redundant columns), you could do another join to add a column with the MongoDB document ID for the (col1, col2, col3) group associated with each row.
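
For example, something like this (just a sketch: monotonically_increasing_id from org.apache.spark.sql.functions stands in for however you actually want to mint the MongoDB document IDs, and the doc_id / r_col* names are arbitrary):

import org.apache.spark.sql.functions.monotonically_increasing_id

// Give each (col1, col2, col3) group a surrogate document ID, then join it
// back so every row carries the ID of its group's document.
val docIds = groups
  .withColumn("doc_id", monotonically_increasing_id())
  .select($"col1" as "r_col1", $"col2" as "r_col2", $"col3" as "r_col3", $"doc_id")

val withDocId = joinedDF.join(docIds,
  $"col1" <=> $"r_col1" and $"col2" <=> $"r_col2" and $"col3" <=> $"r_col3")

withDocId then has everything you need to assemble one document per doc_id on the write side.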

At any rate, in my experience joins and self-joins are the way you handle complicated stuff in DataFrames.
