I have the following leftOuterJoin operation:
val totalsAndProds = transByProd.leftOuterJoin(products)
println(totalsAndProds.first())
which prints:
(19,([Ljava.lang.String;@261ea657,Some([Ljava.lang.String;@25290bca)))
then I try to apply the following filter operation:
totalsAndProds.filter(x => x._2 == Some).first
but it fails with the following exception:
Exception in thread "main" java.lang.UnsupportedOperationException: empty collection
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1380)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.first(RDD.scala:1377)
at com.example.spark.WordCount$.main(WordCount.scala:98)
at com.example.spark.WordCount.main(WordCount.scala)
What am I doing wrong, and why does the filter operation return an empty collection?
Your predicate is wrong. The elements of totalsAndProds have type
(Int, (Array[String], Option[Array[String]]))
so x._2 is of type (Array[String], Option[Array[String]]), not Option[Array[String]]. On top of that, x._2 == Some compares that tuple against the Some companion object, which can never be true, so the filter keeps nothing and first() fails on the resulting empty RDD.
Try
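To see why the original predicate never matches, here is a minimal plain-Scala sketch (no Spark needed) using a row shaped like one element of totalsAndProds; the variable names are illustrative:

```scala
// One element shaped like the join output: (key, (transaction fields, optional product fields))
val row: (Int, (Array[String], Option[Array[String]])) =
  (19, (Array("t1"), Some(Array("p1"))))

// Comparing a tuple to the Some companion object is always false
// (the compiler even warns that the comparison is between unrelated types):
val wrong = row._2 == Some

// The Option sits in the second slot of the inner tuple; test it with isDefined:
val right = row._2._2.isDefined

println(s"wrong = $wrong, right = $right")  // wrong = false, right = true
```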
Try
totalsAndProds.filter{ case (_, (_, s)) => s.isDefined }
Example below:
scala> val rdd = sc.parallelize(List((19, (Array("a"), Some(Array("a"))))))
rdd: org.apache.spark.rdd.RDD[(Int, (Array[String], Some[Array[String]]))] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> rdd.filter{ case (_, (_, s)) => s.isDefined }
res0: org.apache.spark.rdd.RDD[(Int, (Array[String], Some[Array[String]]))] = MapPartitionsRDD[1] at filter at <console>:27
scala> rdd.filter{ case (_, (_, s)) => s.isDefined }.collect
res1: Array[(Int, (Array[String], Some[Array[String]]))] = Array((19,(Array(a),Some([Ljava.lang.String;@5307fee))))