
Apache Spark filter by Some

I have the following leftOuterJoin operation:

val totalsAndProds = transByProd.leftOuterJoin(products)
println(totalsAndProds.first())

which prints:

(19,([Ljava.lang.String;@261ea657,Some([Ljava.lang.String;@25290bca)))

Then I tried to apply the following filter operation:

totalsAndProds.filter(x => x._2 == Some).first

but it fails with the following exception:

Exception in thread "main" java.lang.UnsupportedOperationException: empty collection
    at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1380)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.first(RDD.scala:1377)
    at com.example.spark.WordCount$.main(WordCount.scala:98)
    at com.example.spark.WordCount.main(WordCount.scala)

What am I doing wrong, so that the filter operation returns an empty collection?

Your predicate is wrong:

  1. Your RDD's element type is (Int, (Array[String], Option[Array[String]])), so _._2 is the whole (Array[String], Option[Array[String]]) pair, not the Option[Array[String]].
  2. You don't check an Option with equality. x._2 == Some compares a value against the Some companion object, which is never true, so the filter keeps nothing and first() fails on the empty result.
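The second point can be seen without Spark at all. A minimal sketch in plain Scala (the object and variable names are illustrative) showing that comparing an Option against the `Some` companion object is always false, while `isDefined` does what the original predicate intended:

```scala
object OptionEqualityDemo {
  def main(args: Array[String]): Unit = {
    val s: Option[Array[String]] = Some(Array("a"))

    // Comparing a value to the `Some` companion object is always false:
    // it is an object/value comparison, not a "is this a Some?" check.
    println(s == Some)    // false

    // The idiomatic checks:
    println(s.isDefined)  // true
    println(s.nonEmpty)   // true
  }
}
```

This is why the original filter produced an empty RDD rather than a compile error: the comparison is legal Scala, it just never succeeds.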

Try

totalsAndProds.filter{ case (_, (_, s)) => s.isDefined }

A full example:

scala> val rdd = sc.parallelize(List((19, (Array("a"), Some(Array("a"))))))
rdd: org.apache.spark.rdd.RDD[(Int, (Array[String], Some[Array[String]]))] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.filter{ case (_, (_, s)) => s.isDefined }
res0: org.apache.spark.rdd.RDD[(Int, (Array[String], Some[Array[String]]))] = MapPartitionsRDD[1] at filter at <console>:27

scala> rdd.filter{ case (_, (_, s)) => s.isDefined }.collect
res1: Array[(Int, (Array[String], Some[Array[String]]))] = Array((19,(Array(a),Some([Ljava.lang.String;@5307fee))))
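Once the unmatched rows are filtered out, you may also want to unwrap the Option so downstream code sees the joined arrays directly. A sketch using plain Scala collections as a stand-in for the RDD (the same pattern works on the RDD via map; all names here are illustrative, and .get is safe only because isDefined was just checked):

```scala
object UnwrapDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for the leftOuterJoin result: (key, (left, Option[right]))
    val joined: List[(Int, (Array[String], Option[Array[String]]))] =
      List(
        (19, (Array("a"), Some(Array("a")))),
        (20, (Array("b"), None))
      )

    // Keep only the matched rows, then unwrap the Option.
    val matched: List[(Int, (Array[String], Array[String]))] =
      joined
        .filter { case (_, (_, s)) => s.isDefined }
        .map { case (k, (a, s)) => (k, (a, s.get)) }

    println(matched.map(_._1))  // List(19)
  }
}
```

An equivalent single pass is `joined.collect { case (k, (a, Some(b))) => (k, (a, b)) }`, which avoids the explicit `.get` entirely.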


Disclaimer: the posts on this site are licensed under CC BY-SA 4.0; if you repost, please attribute this site or the original source. For any questions contact: yoyou2525@163.com.
