
How to reduce a List[Key, List[Name, Value]] in Spark?

This is the structure of my model:

package object summary {
  case class NameValuePair(name: String, value: Long)

  case class Result(key: String, pairs: List[NameValuePair])

  case class Data(data: List[Result])
}

The data would look like this:

[
  Result("Paris", List(NameValuePair("apples",10), NameValuePair("oranges",20), NameValuePair("peaches",30))),
  Result("Paris", List(NameValuePair("apples",20), NameValuePair("oranges",30), NameValuePair("peaches",40))),
  Result("NY", List(NameValuePair("apples",20), NameValuePair("oranges",30), NameValuePair("peaches",40))),
  Result("NY", List(NameValuePair("apples",40), NameValuePair("oranges",30), NameValuePair("peaches",10))),
  Result("London", List(NameValuePair("apples",20), NameValuePair("oranges",30), NameValuePair("peaches",40)))
]

I want an output like the following:

[
("Paris", [("apples", 30),("oranges", 50),("peaches",70)]),
("NY", [("apples", 60),("oranges", 60),("peaches",50)]),
("London", [("apples", 20),("oranges", 30),("peaches",40)])
]

I want to find the sum of each fruit count, grouped by city. How can I do this with Spark?

You can achieve this using Spark RDDs, as shown below.

I recreated your data in order to build an RDD from it:

val data_test = List(
  Result("Paris", List(NameValuePair("apples", 10), NameValuePair("oranges", 20), NameValuePair("peaches", 30))),
  Result("Paris", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40))),
  Result("NY", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40))),
  Result("NY", List(NameValuePair("apples", 40), NameValuePair("oranges", 30), NameValuePair("peaches", 10))),
  Result("London", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40)))
)

Then I created an RDD from data_test and applied transformations to it. Here is the code:

// Parallelize the local list into an RDD
val rdd_data = sc.parallelize(data_test)
// Build one ((city, fruit), value) pair per position; this assumes exactly three pairs per Result
val rdd_1 = rdd_data.map(x => ((x.key, x.pairs(0).name), x.pairs(0).value))
val rdd_2 = rdd_data.map(x => ((x.key, x.pairs(1).name), x.pairs(1).value))
val rdd_3 = rdd_data.map(x => ((x.key, x.pairs(2).name), x.pairs(2).value))
val rdd_final = rdd_1.union(rdd_2).union(rdd_3)
// Sum the values per (city, fruit) key
val rdd_reduce = rdd_final.reduceByKey((x, y) => x + y)
// Regroup by city, collecting the (fruit, total) pairs into a list
val rdd_transformed = rdd_reduce.map(x => (x._1._1, (x._1._2, x._2))).groupByKey().map(x => (x._1, x._2.toList))
rdd_transformed.foreach(println)

The result obtained looks like this:

(NY,List((peaches,50), (apples,60), (oranges,60)))
(London,List((apples,20), (peaches,40), (oranges,30)))
(Paris,List((oranges,50), (peaches,70), (apples,30)))

[Edited after comments] If the number of pairs varies, you can define a function like this:

// Flatten one Result into ((city, fruit), value) tuples, for any number of pairs
def func(res: Result): List[((String, String), Long)] =
  res.pairs.map(p => ((res.key, p.name), p.value))

Then you can jump straight to the line above where I generated rdd_final, like this:

val rdd_final = rdd_data.flatMap(func)

Then execute the other instructions in the same way.
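For reference, here is the variable-length version of the whole pipeline in one place (a minimal sketch; it reuses sc, data_test, and func defined above, and only the name rdd_result is introduced here for illustration):

val rdd_data = sc.parallelize(data_test)
// One ((city, fruit), value) tuple per pair, however many pairs each Result has
val rdd_final = rdd_data.flatMap(func)
// Sum per (city, fruit), then regroup the per-fruit totals under each city
val rdd_result = rdd_final
  .reduceByKey(_ + _)
  .map { case ((city, fruit), total) => (city, (fruit, total)) }
  .groupByKey()
  .map { case (city, totals) => (city, totals.toList) }
rdd_result.foreach(println)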

I would do this with a DataFrame, using the groupBy function. Like this:

import spark.implicits._
Seq(
  Result("Paris", List(NameValuePair("apples", 10), NameValuePair("oranges", 20), NameValuePair("peaches", 30))),
  Result("Paris", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40))),
  Result("NY", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40))),
  Result("NY", List(NameValuePair("apples", 40), NameValuePair("oranges", 30), NameValuePair("peaches", 10))),
  Result("London", List(NameValuePair("apples", 20), NameValuePair("oranges", 30), NameValuePair("peaches", 40)))
).flatMap(row => {
  // One (city, fruit, value) row per NameValuePair
  val city = row.key
  row.pairs.map(f => (city, f.name, f.value))
}).toDF("city", "fruit", "value")
  .groupBy("city").sum().show()
//The result would be:
+------+----------+
|  city|sum(value)|
+------+----------+
|London|        90|
| Paris|       150|
|    NY|       170|
+------+----------+
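Note that groupBy("city") alone sums across all fruits, giving a single total per city. To keep the per-fruit totals asked for in the question, you can group by both columns; a minimal sketch (the name df and the inlined rows are illustrative, just restating the flattened data from above):

import spark.implicits._

// Hypothetical df: the same flattened (city, fruit, value) rows as above,
// bound to a name so they can be grouped a second way
val df = Seq(
  ("Paris", "apples", 10L), ("Paris", "oranges", 20L), ("Paris", "peaches", 30L),
  ("Paris", "apples", 20L), ("Paris", "oranges", 30L), ("Paris", "peaches", 40L),
  ("NY", "apples", 20L), ("NY", "oranges", 30L), ("NY", "peaches", 40L),
  ("NY", "apples", 40L), ("NY", "oranges", 30L), ("NY", "peaches", 10L),
  ("London", "apples", 20L), ("London", "oranges", 30L), ("London", "peaches", 40L)
).toDF("city", "fruit", "value")

// Grouping by both columns keeps the per-fruit breakdown the question asks for
df.groupBy("city", "fruit").sum("value").show()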
