
Convert RDD from type `org.apache.spark.rdd.RDD[((String, String), Double)]` to `org.apache.spark.rdd.RDD[((String), List[Double])]`

I have an RDD:

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      )) 

I'm attempting to convert this RDD from type `org.apache.spark.rdd.RDD[((String, String), Double)]` to `org.apache.spark.rdd.RDD[((String), List[Double])]`.

Each key in the RDD should be unique and its values sorted.

So the above rdd structure will be converted to:

val newRdd: org.apache.spark.rdd.RDD[(String, List[Double])]   // containing ("a", List(1.0, 2.0, 3.0))

To get a unique listing of keys I use:

val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))

How can I convert each key to contain a List of sorted Doubles?

Entire code:

import org.apache.spark.SparkContext

object group {
  println("Welcome to the Scala worksheet")       //> Welcome to the Scala worksheet

  val conf = new org.apache.spark.SparkConf()
    .setMaster("local")
    .setAppName("distances")
    .setSparkHome("C:\\spark-1.1.0-bin-hadoop2.4\\spark-1.1.0-bin-hadoop2.4")
    .set("spark.executor.memory", "1g")           //> conf  : org.apache.spark.SparkConf = org.apache.spark.SparkConf@1bd0dd4

  val sc = new SparkContext(conf)                 //> 14/12/16 16:44:56 INFO spark.SecurityManager: Changing view acls to: a511381
                                                  //| ,
                                                  //| 14/12/16 16:44:56 INFO spark.SecurityManager: Changing modify acls to: a5113
                                                  //| 81,
                                                  //| 14/12/16 16:44:56 INFO spark.SecurityManager: SecurityManager: authenticatio
                                                  //| n disabled; ui acls disabled; users with view permissions: Set(a511381, ); u
                                                  //| sers with modify permissions: Set(a511381, )
                                                  //| 14/12/16 16:44:57 INFO slf4j.Slf4jLogger: Slf4jLogger started
                                                  //| 14/12/16 16:44:57 INFO Remoting: Starting remoting
                                                  //| 14/12/16 16:44:57 INFO Remoting: Remoting started; listening on addresses :[
                                                  //| akka.tcp://sparkDriver@LA342399.dmn1.fmr.com:51092]
                                                  //| 14/12/16 16:44:57 INFO Remoting: Remoting now listens on addresses: [akka.tc
                                                  //| p://sparkDriver@LA342399.dmn1.fmr.com:51092]
                                                  //| 14/12/16 16:44:57 INFO util.Utils: Successfully started service 'sparkDriver
                                                  //| ' on port 51092.
                                                  //| 14/12/16 16:44:57 INFO spark.SparkEnv: Registering MapOutputTracker
                                                  //| 14/12/16 16:44:57 INFO spark.SparkEnv:
                                                  //| Output exceeds cutoff limit.

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      ))                                          //> rdd  : org.apache.spark.rdd.RDD[((String, String), Double)] = ParallelCollec
                                                  //| tionRDD[0] at parallelize at group.scala:15

     val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))
                                                  //> r2  : org.apache.spark.rdd.RDD[(String, Double)] = MappedRDD[1] at map at gr
                                                  //| oup.scala:21

     val m1 = r2.collect                          //> 14/12/16 16:44:59 INFO spark.SparkContext: Starting job: collect at group.sc
                                                  //| ala:23
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Got job 0 (collect at group.s
                                                  //| cala:23) with 1 output partitions (allowLocal=false)
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Final stage: Stage 0(collect 
                                                  //| at group.scala:23)
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Parents of final stage: List(
                                                  //| )
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Missing parents: List()
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD
                                                  //| [1] at map at group.scala:21), which has no missing parents
                                                  //| 14/12/16 16:44:59 WARN util.SizeEstimator: Failed to check whether UseCompre
                                                  //| ssedOops is set; assuming yes
                                                  //| 14/12/16 16:44:59 INFO storage.MemoryStore: ensureFreeSpace(1584) called wit
                                                  //| h curMem=0, maxMem=140142182
                                                  //| 14/12/16 16:44:59 INFO storage.MemoryStore: Block broadcast_0 stored as valu
                                                  //| es in memory (estimated size 1584.0 B
                                                  //| Output exceeds cutoff limit.
     m1.foreach { case (e, i) => println(e + "," + i) }
                                                  //> a,1.0
                                                  //| a,3.0
                                                  //| a,2.0


}

With @Imm's solution your values will not necessarily be sorted; if they come out sorted, it is only by coincidence. To get a sorted list you only have to add:

val r4 = r3.mapValues(_.toList.sorted)

Now r4 will be an RDD in which each value list is sorted for its key.
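
For example, a minimal sketch (assuming r3 is the RDD[(String, Iterable[Double])] produced by groupByKey in @Imm's answer below, and using the question's sample data):

    val r4: org.apache.spark.rdd.RDD[(String, List[Double])] =
      r3.mapValues(_.toList.sorted)

    r4.collect.foreach(println)   // expected with the sample data: (a,List(1.0, 2.0, 3.0))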

I hope this is useful.

Use groupByKey:

val r3: RDD[(String, Iterable[Double])] = r2.groupByKey

If you really want the second element to be a List rather than a general Iterable, then you can use mapValues:

val r4 = r3.mapValues(_.toList)

Make sure you import org.apache.spark.SparkContext._ at the top so these functions are available.
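
Putting it together, here is a minimal self-contained sketch of this approach applied to the question's data (the object name, local master and app name are just illustrative):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._   // brings groupByKey/mapValues into scope on pair RDDs
    import org.apache.spark.rdd.RDD

    object GroupExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("distances"))

        val rdd: RDD[((String, String), Double)] =
          sc.parallelize(List((("a", "b"), 1.0), (("a", "c"), 3.0), (("a", "d"), 2.0)))

        // Keep only the first part of each key, then collect all values per key.
        val r2: RDD[(String, Double)]           = rdd.map { case ((k, _), v) => (k, v) }
        val r3: RDD[(String, Iterable[Double])] = r2.groupByKey
        val r4: RDD[(String, List[Double])]     = r3.mapValues(_.toList)

        r4.collect.foreach(println)   // e.g. (a,List(1.0, 3.0, 2.0)) -- order within each list is not guaranteed
        sc.stop()
      }
    }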
