Adding vectors present in two different RDDs (Scala/Spark)
I have two RDDs with this structure:
org.apache.spark.rdd.RDD[(Long, org.apache.spark.mllib.linalg.Vector)]
Here each row of the RDD contains an index (Long) and a vector (org.apache.spark.mllib.linalg.Vector). I want to add each component of a Vector to the corresponding component of a Vector in a row of the other RDD. Each vector of the first RDD should be added to each vector of the other RDD.
An example would look like this:

RDD1:
Array[(Long, org.apache.spark.mllib.linalg.Vector)] =
Array((0,[0.1,0.2]),(1,[0.3,0.4]))
RDD2:
Array[(Long, org.apache.spark.mllib.linalg.Vector)] =
Array((0,[0.3,0.8]),(1,[0.2,0.7]))
Result:
Array[(Long, org.apache.spark.mllib.linalg.Vector)] =
Array((0,[0.4,1.0]),(0,[0.3,0.9]),(1,[0.6,1.2]),(1,[0.5,1.1]))
Please consider the same situation using List instead of Array.
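For reference, the cross product of sums described above can be sketched with plain Scala collections. This is a minimal sketch, not from the answers below: `elementwiseSum` and `crossSum` are hypothetical helper names, and each result keeps the key of the vector from the first list, matching the expected output above.

```scala
object CrossSum {
  // Hypothetical helper: element-wise sum of two equal-length vectors.
  def elementwiseSum(v1: List[Double], v2: List[Double]): List[Double] =
    v1.zip(v2).map { case (x, y) => x + y }

  // Add every vector of l1 to every vector of l2 (full cross product),
  // keeping the key from l1.
  def crossSum(l1: List[(Int, List[Double])],
               l2: List[(Int, List[Double])]): List[(Int, List[Double])] =
    for {
      (i, v1) <- l1
      (_, v2) <- l2
    } yield (i, elementwiseSum(v1, v2))
}
```

On the example data this yields four entries with keys 0, 0, 1, 1, matching the expected result up to floating-point rounding.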
Here is my solution:
val l1 = List((0, List(0.1, 0.2)), (1, List(0.1, 0.2)))
val l2 = List((0, List(0.3, 0.8)), (1, List(0.2, 0.7)))
val sms = (l1 zip l2).map { case ((i, v1), (_, v2)) => (i, v1.zip(v2).map { case (x, y) => x + y }) }
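One thing worth noting about this approach (a quick check, using the question's example data rather than the lists above): `zip` pairs the two lists positionally, so it produces one summed vector per index, i.e. pairwise sums only, not the full cross product shown in the question's expected result.

```scala
// Zip-based solution applied to the question's example data.
// `zip` pairs entries by position, so the result has one vector per index.
val l1 = List((0, List(0.1, 0.2)), (1, List(0.3, 0.4)))
val l2 = List((0, List(0.3, 0.8)), (1, List(0.2, 0.7)))
val sms = (l1 zip l2).map { case ((i, v1), (_, v2)) =>
  (i, v1.zip(v2).map { case (x, y) => x + y })
}
// sms has two entries (keys 0 and 1), not the four cross-product entries.
```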
Let's experiment with Array :)
Instead of doing this in driver code, you can do it all in a transformation. This is helpful if you have large RDDs, and it performs less shuffling too.
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.{Vector, Vectors}

val a: RDD[(Long, Vector)] = sc.parallelize(Array((0L, Vectors.dense(0.1, 0.2)), (1L, Vectors.dense(0.3, 0.4))))
val b: RDD[(Long, Vector)] = sc.parallelize(Array((0L, Vectors.dense(0.3, 0.8)), (1L, Vectors.dense(0.2, 0.7))))
val ab = a join b
val result = ab.map { case (i, (v1, v2)) => (i, Vectors.dense(v1(0) + v2(0), v1(1) + v2(1))) }
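Note that `join` pairs only vectors that share a key, giving pairwise sums. If you want the full cross product from the question's expected output, the standard `RDD.cartesian` operation can be used instead. A minimal sketch (assuming all vectors have the same dimension, with keys taken from `a`; `crossSum` is a hypothetical helper name, and this requires a live `SparkContext` to run):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Add every vector of `a` to every vector of `b` (full cross product),
// keeping the key from `a`. Works for vectors of any (equal) dimension.
def crossSum(a: RDD[(Long, Vector)], b: RDD[(Long, Vector)]): RDD[(Long, Vector)] =
  a.cartesian(b).map { case ((i, v1), (_, v2)) =>
    (i, Vectors.dense(v1.toArray.zip(v2.toArray).map { case (x, y) => x + y }))
  }
```

Be aware that `cartesian` produces |a| × |b| rows, so it is only practical when at least one RDD is small.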