Spark RDD filter after groupByKey
// create RDD
val rdd = sc.makeRDD(List(("a", (1, "m")), ("b", (1, "m")),
  ("a", (1, "n")), ("b", (2, "n")), ("c", (1, "m")),
  ("c", (5, "m")), ("d", (1, "m")), ("d", (1, "n"))))
val groupRDD = rdd.groupByKey()
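At this point groupRDD contains one entry per key; a sketch of its contents (the Seq notation is illustrative, and the ordering of Spark output is not guaranteed):

(a, Seq((1,m), (1,n)))
(b, Seq((1,m), (2,n)))
(c, Seq((1,m), (5,m)))
(d, Seq((1,m), (1,n)))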
After groupByKey I want to filter out the keys whose values all have 1 as their first element, and get
("b", (1, "m")),("b", (2, "n")), ("c", (1, "m")), ("c", (5, "m"))
groupByKey() is required. Can anyone help me? Thanks a lot.
Addition: but what if the second element's type is String, and I want to filter out the keys whose values all have "x" as their first element,
like ("a",("x","m")), ("a",("x","n")), ("b",("x","m")), ("b",("y","n")), ("c",("x","m")), ("c",("z","m")), ("d",("x","m")), ("d",("x","n")),
and get the corresponding result ("b",("x","m")), ("b",("y","n")), ("c",("x","m")), ("c",("z","m"))?
You can do it like this:
val groupRDD = rdd
  .groupByKey()
  // keep the groups whose first elements don't sum to the group size
  .filter(value => value._2.map(tuple => tuple._1).sum != value._2.size)
  // flatten back to the shape you want; right now the entries look like, e.g., (b, Seq((1, m), (2, n)))
  .flatMapValues(list => list)
What this does: we first group by key with groupByKey, then in filter we sum the first elements of each group's values and check whether the sum equals the size of the group, which for this data is only true when every first element is 1. For example:
(a, Seq((1, m), (1, n))) -> grouped by key
(a, Seq((1, m), (1, n))) -> sum = 2 (1 + 1), size = 2 (length of the sequence)
2 = 2, so this row is filtered out
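Note that the sum-vs-size trick relies on the values being positive integers where 1 marks the "all equal" case; a group like (0, m), (2, n) would also sum to its size and be dropped. A more direct sketch of the same filter (robustRDD is just an assumed name) checks the condition explicitly:

val robustRDD = rdd
  .groupByKey()
  // keep only the groups where not every first element equals 1
  .filter { case (_, values) => !values.forall(_._1 == 1) }
  .flatMapValues(identity)

On the sample data this returns the same four rows.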
Final result:
(c,(1,m))
(b,(1,m))
(c,(5,m))
(b,(2,n))
Good luck!
EDIT
Assume the key in the tuple can be any string, and assume rdd is your data, containing:
(a,(x,m))
(c,(x,m))
(c,(z,m))
(d,(x,m))
(b,(x,m))
(a,(x,n))
(d,(x,n))
(b,(y,n))
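For reference, a minimal sketch of how this rdd could be built from the sample data in the question:

val rdd = sc.makeRDD(List(("a", ("x", "m")), ("a", ("x", "n")),
  ("b", ("x", "m")), ("b", ("y", "n")), ("c", ("x", "m")),
  ("c", ("z", "m")), ("d", ("x", "m")), ("d", ("x", "n"))))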
Then we can construct uniqueCount as:
val uniqueCount = rdd
  // swap places: we want to count each (key, first element) combination, i.e. (a, x), (b, x), (b, y), (c, x), etc.
  .map(entry => ((entry._1, entry._2._1), entry._2._2))
  // count the combinations: (a, x) gives us 2, (b, x) gives us 1, (b, y) gives us 1, etc.
  .countByKey()
  // keep only counts of 1 and drop the rest, because counts of 2 or more are duplicates
  .filter(a => a._2 == 1)
  // extract the original keys, so we can filter on them below
  .map(a => a._1._1)
  .toList
Then this (with the sample data, uniqueCount ends up containing b, b, c, c; the duplicates are harmless for the contains check):
val filteredRDD = rdd.filter(a => uniqueCount.contains(a._1))
gives this output:
(b,(y,n))
(c,(x,m))
(c,(z,m))
(b,(x,m))
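One caveat: countByKey returns a plain Map on the driver, so uniqueCount has to fit in driver memory. If the String-case goal is simply "drop the keys whose values all have x as their first element", a fully distributed sketch in the spirit of the first snippet (altRDD is an assumed name) would be:

val altRDD = rdd
  .groupByKey()
  // keep keys whose values' first elements are not all "x"
  .filter { case (_, values) => !values.forall(_._1 == "x") }
  .flatMapValues(identity)

On the sample data this yields the same four rows.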